hexsha | size | ext | lang | max_stars_repo_path | max_stars_repo_name | max_stars_repo_head_hexsha | max_stars_repo_licenses | max_stars_count | max_stars_repo_stars_event_min_datetime | max_stars_repo_stars_event_max_datetime | max_issues_repo_path | max_issues_repo_name | max_issues_repo_head_hexsha | max_issues_repo_licenses | max_issues_count | max_issues_repo_issues_event_min_datetime | max_issues_repo_issues_event_max_datetime | max_forks_repo_path | max_forks_repo_name | max_forks_repo_head_hexsha | max_forks_repo_licenses | max_forks_count | max_forks_repo_forks_event_min_datetime | max_forks_repo_forks_event_max_datetime | avg_line_length | max_line_length | alphanum_fraction | cells | cell_types | cell_type_groups |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cb580bfb1fd1002884871b871de5e18f3b0e3528 | 3,460 | ipynb | Jupyter Notebook | HW0.ipynb | iHamidHasani/HW0 | 66dc1a7c480c4f3bee194f3acc55ee50253cbc54 | [
"MIT"
] | null | null | null | HW0.ipynb | iHamidHasani/HW0 | 66dc1a7c480c4f3bee194f3acc55ee50253cbc54 | [
"MIT"
] | null | null | null | HW0.ipynb | iHamidHasani/HW0 | 66dc1a7c480c4f3bee194f3acc55ee50253cbc54 | [
"MIT"
] | null | null | null | 30.892857 | 342 | 0.565029 | [
[
[
"# Importing packages\n\nimport numpy as np # Numpy is doing linear algebra and all the math for us\nimport random # Random to create random numbers\n\n# Information on autoreload which will be important to update your modules \n# when you change them so that they work in the jupyter notebook\n# https://ipython.org/ipython-doc/3/config/extensions/autoreload.html\n%load_ext autoreload\n%autoreload 2\n#%reload_ext autoreload",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"#import rawpy\nimport numpy as np\nimport glob\nimport cv2\nfrom PIL import Image\nimport os\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import imshow",
"_____no_output_____"
],
[
"import src.code as code\n",
"_____no_output_____"
],
[
"code.sum_numbers(1,2)\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
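
The HW0 record above imports a local helper module (`import src.code as code`) and calls `code.sum_numbers(1, 2)`. A minimal sketch of what such a module could contain is given below; only the function name appears in the notebook, so the body is an assumption.

```python
# src/code.py -- hypothetical contents matching `import src.code as code` above.
# Only the name `sum_numbers` is known from the notebook; the implementation is assumed.

def sum_numbers(a, b):
    """Return the sum of two numbers."""
    return a + b
```
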
cb580e7f84bd9f82b5137c551cd6782535852bb9 | 7,206 | ipynb | Jupyter Notebook | jupyter/BERTQA.ipynb | michaellavelle/djl | 468b29490b94f8b8dc38f7f7237a119884c25ff6 | [
"Apache-2.0"
] | 1 | 2020-09-18T04:29:36.000Z | 2020-09-18T04:29:36.000Z | jupyter/BERTQA.ipynb | michaellavelle/djl | 468b29490b94f8b8dc38f7f7237a119884c25ff6 | [
"Apache-2.0"
] | null | null | null | jupyter/BERTQA.ipynb | michaellavelle/djl | 468b29490b94f8b8dc38f7f7237a119884c25ff6 | [
"Apache-2.0"
] | null | null | null | 32.169643 | 229 | 0.607272 | [
[
[
"# DJL BERT Inference Demo\n\n## Introduction\n\nIn this tutorial, you walk through running inference using DJL on a [BERT](https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270) QA model trained with MXNet and PyTorch. \nYou can provide a question and a paragraph containing the answer to the model. The model is then able to find the best answer from the answer paragraph.\n\nExample:\n```text\nQ: When did BBC Japan start broadcasting?\n```\n\nAnswer paragraph:\n```text\nBBC Japan was a general entertainment channel, which operated between December 2004 and April 2006.\nIt ceased operations after its Japanese distributor folded.\n```\nAnd it picked the right answer:\n```text\nA: December 2004\n```\n\nOne of the most powerful features of DJL is that it's engine agnostic. Because of this, you can run different backend engines seamlessly. We showcase BERT QA first with an MXNet pre-trained model, then with a PyTorch model.",
"_____no_output_____"
],
[
"## Preparation\n\nThis tutorial requires the installation of Java Kernel. To install the Java Kernel, see the [README](https://github.com/awslabs/djl/blob/master/jupyter/README.md).",
"_____no_output_____"
]
],
[
[
"// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/\n\n%maven ai.djl:api:0.6.0\n%maven ai.djl.mxnet:mxnet-engine:0.6.0\n%maven ai.djl.mxnet:mxnet-model-zoo:0.6.0\n%maven ai.djl.pytorch:pytorch-engine:0.6.0\n%maven ai.djl.pytorch:pytorch-model-zoo:0.6.0\n%maven org.slf4j:slf4j-api:1.7.26\n%maven org.slf4j:slf4j-simple:1.7.26\n%maven net.java.dev.jna:jna:5.3.0\n\n// See https://github.com/awslabs/djl/blob/master/mxnet/mxnet-engine/README.md\n// and See https://github.com/awslabs/djl/blob/master/pytorch/pytorch-engine/README.md\n// for more engine library selection options\n%maven ai.djl.mxnet:mxnet-native-auto:1.7.0-b\n%maven ai.djl.pytorch:pytorch-native-auto:1.5.0",
"_____no_output_____"
]
],
[
[
"### Import java packages by running the following:",
"_____no_output_____"
]
],
[
[
"import ai.djl.*;\nimport ai.djl.engine.*;\nimport ai.djl.modality.nlp.qa.*;\nimport ai.djl.repository.zoo.*;\nimport ai.djl.training.util.*;\nimport ai.djl.inference.*;\nimport ai.djl.repository.zoo.*;",
"_____no_output_____"
]
],
[
[
"Now that all of the prerequisites are complete, start writing code to run inference with this example.\n\n\n## Load the model and input\n\n**First, load the input**",
"_____no_output_____"
]
],
[
[
"var question = \"When did BBC Japan start broadcasting?\";\nvar resourceDocument = \"BBC Japan was a general entertainment Channel.\\n\" +\n \"Which operated between December 2004 and April 2006.\\n\" +\n \"It ceased operations after its Japanese distributor folded.\";\n\nQAInput input = new QAInput(question, resourceDocument);",
"_____no_output_____"
]
],
[
[
"Then load the model and vocabulary. Create a variable `model` by using the `ModelZoo` as shown in the following code.",
"_____no_output_____"
]
],
[
[
"Criteria<QAInput, String> criteria = Criteria.builder()\n .optApplication(Application.NLP.QUESTION_ANSWER)\n .setTypes(QAInput.class, String.class)\n .optFilter(\"backbone\", \"bert\")\n .optEngine(\"MXNet\") // For DJL to use MXNet engine\n .optProgress(new ProgressBar()).build();\nZooModel<QAInput, String> model = ModelZoo.loadModel(criteria);",
"_____no_output_____"
]
],
[
[
"## Run inference\nOnce the model is loaded, you can call `Predictor` and run inference as follows",
"_____no_output_____"
]
],
[
[
"Predictor<QAInput, String> predictor = model.newPredictor();\nString answer = predictor.predict(input);\nanswer",
"_____no_output_____"
]
],
[
[
"Running inference on DJL is that easy. Now, let's try the PyTorch engine by specifying PyTorch engine in Criteria.optEngine(\"PyTorch\"). Let's rerun the inference code.",
"_____no_output_____"
]
],
[
[
"var question = \"When did BBC Japan start broadcasting?\";\nvar resourceDocument = \"BBC Japan was a general entertainment Channel.\\n\" +\n \"Which operated between December 2004 and April 2006.\\n\" +\n \"It ceased operations after its Japanese distributor folded.\";\n\nQAInput input = new QAInput(question, resourceDocument);\n\nCriteria<QAInput, String> criteria = Criteria.builder()\n .optApplication(Application.NLP.QUESTION_ANSWER)\n .setTypes(QAInput.class, String.class)\n .optFilter(\"backbone\", \"bert\")\n .optEngine(\"PyTorch\") // Use PyTorch engine\n .optProgress(new ProgressBar()).build();\nZooModel<QAInput, String> model = ModelZoo.loadModel(criteria);\nPredictor<QAInput, String> predictor = model.newPredictor();\nString answer = predictor.predict(input);\nanswer",
"_____no_output_____"
]
],
[
[
"## Summary\nSuprisingly, there are no differences between the PyTorch code snippet and MXNet code snippet. \nThis is power of DJL. We define a unified API where you can switch to different backend engines on the fly.\nNext chapter: Inference with your own BERT: [MXNet](mxnet/load_your_own_mxnet_bert.ipynb) [PyTorch](pytorch/load_your_own_pytorch_bert.ipynb).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
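
The DJL BERT-QA record above is written in Java. As a hedged, non-DJL analogue of the same extractive question-answering idea in Python, a Hugging Face `transformers` pipeline can be used; this assumes the `transformers` package is installed and downloads a default QA model, and it is not the API shown in the notebook.

```python
# Rough Python analogue of the BERT-QA example above (not DJL).
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default extractive-QA model
result = qa(
    question="When did BBC Japan start broadcasting?",
    context=(
        "BBC Japan was a general entertainment channel, which operated "
        "between December 2004 and April 2006. It ceased operations after "
        "its Japanese distributor folded."
    ),
)
print(result["answer"])  # expected to be roughly "December 2004"
```
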
cb580f436d9c62727f56429e3b78d1f33f7b4fb0 | 198,533 | ipynb | Jupyter Notebook | quests/serverlessml/06_feateng_keras/taxifare_fc.ipynb | SDRLurker/training-data-analyst | c4c7778e124eccb54f1a6dc57397a591ff8d6398 | [
"Apache-2.0"
] | 1 | 2019-07-21T15:13:34.000Z | 2019-07-21T15:13:34.000Z | quests/serverlessml/06_feateng_keras/taxifare_fc.ipynb | SDRLurker/training-data-analyst | c4c7778e124eccb54f1a6dc57397a591ff8d6398 | [
"Apache-2.0"
] | null | null | null | quests/serverlessml/06_feateng_keras/taxifare_fc.ipynb | SDRLurker/training-data-analyst | c4c7778e124eccb54f1a6dc57397a591ff8d6398 | [
"Apache-2.0"
] | 1 | 2022-02-27T20:24:23.000Z | 2022-02-27T20:24:23.000Z | 254.529487 | 130,144 | 0.896597 | [
[
[
"# Feature Engineering in Keras.\n\nLet's start off with the Python imports that we need.",
"_____no_output_____"
]
],
[
[
"import os, json, math, shutil\nimport numpy as np\nimport tensorflow as tf\nprint(tf.__version__)",
"2.0.0-dev20190629\n"
],
[
"# Note that this cell is special. It's got a tag (you can view tags by clicking on the wrench icon on the left menu in Jupyter)\n# These are parameters that we will configure so that we can schedule this notebook\nDATADIR = '../data'\nOUTDIR = './trained_model'\nEXPORT_DIR = os.path.join(OUTDIR,'export/savedmodel')\nNBUCKETS = 10 # for feature crossing\nTRAIN_BATCH_SIZE = 32\nNUM_TRAIN_EXAMPLES = 10000 * 5 # remember the training dataset repeats, so this will wrap around\nNUM_EVALS = 5 # evaluate this many times\nNUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but no so much that it slows down",
"_____no_output_____"
]
],
[
[
"## Locating the CSV files\n\nWe will start with the CSV files that we wrote out in the [first notebook](../01_explore/taxifare.iypnb) of this sequence. Just so you don't have to run the notebook, we saved a copy in ../data",
"_____no_output_____"
]
],
[
[
"if DATADIR[:5] == 'gs://':\n !gsutil ls $DATADIR/*.csv\nelse:\n !ls -l $DATADIR/*.csv",
"-rw-r--r-- 1 jupyter jupyter 126266 Jun 3 15:48 ../data/taxi-test.csv\n-rw-r--r-- 1 jupyter jupyter 593612 Jun 3 15:48 ../data/taxi-train.csv\n-rw-r--r-- 1 jupyter jupyter 126833 Jun 3 15:48 ../data/taxi-valid.csv\n"
]
],
[
[
"## Use tf.data to read the CSV files\n\nWe wrote these cells in the [third notebook](../03_tfdata/input_pipeline.ipynb) of this sequence.",
"_____no_output_____"
]
],
[
[
"CSV_COLUMNS = ['fare_amount', 'pickup_datetime',\n 'pickup_longitude', 'pickup_latitude', \n 'dropoff_longitude', 'dropoff_latitude', \n 'passenger_count', 'key']\nLABEL_COLUMN = 'fare_amount'\nDEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]",
"_____no_output_____"
],
[
"def features_and_labels(row_data):\n for unwanted_col in ['key']: # keep the pickup_datetime!\n row_data.pop(unwanted_col)\n label = row_data.pop(LABEL_COLUMN)\n return row_data, label # features, label\n\n# load the training data\ndef load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):\n pattern = '{}/{}'.format(DATADIR, pattern)\n dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)\n .map(features_and_labels) # features, label\n .cache())\n if mode == tf.estimator.ModeKeys.TRAIN:\n print(\"Repeating training dataset indefinitely\")\n dataset = dataset.shuffle(1000).repeat()\n dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE\n return dataset",
"_____no_output_____"
],
[
"import datetime\n# Python 3.5 doesn't handle timezones of the form 00:00, only 0000\ns = '2012-07-05 14:18:00+00:00'\nprint(s)\nts = datetime.datetime.strptime(s.replace(':',''), \"%Y-%m-%d %H%M%S%z\")\nprint(ts.weekday())\nDAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']\nprint(DAYS[ts.weekday()])",
"2012-07-05 14:18:00+00:00\n3\nWed\n"
],
[
"s = tf.constant('2012-07-05 14:18:00+00:00').numpy().decode('utf-8')\nprint(s)\nts = datetime.datetime.strptime(s.replace(':',''), \"%Y-%m-%d %H%M%S%z\")\nprint(ts.weekday())\nDAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']\nprint(DAYS[ts.weekday()])",
"2012-07-05 14:18:00+00:00\n3\nWed\n"
],
[
"## Add transformations\ndef euclidean(params):\n lon1, lat1, lon2, lat2 = params\n londiff = lon2 - lon1\n latdiff = lat2 - lat1\n return tf.sqrt(londiff*londiff + latdiff*latdiff)\n\nDAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']\ndef get_dayofweek(s):\n # Python 3.5 doesn't handle timezones of the form 00:00, only 0000\n s1 = s.numpy().decode('utf-8') # get Python string\n ts = datetime.datetime.strptime(s1.replace(':',''), \"%Y-%m-%d %H%M%S%z\")\n return DAYS[ts.weekday()]\n\ndef dayofweek(ts_in):\n return tf.map_fn(\n lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),\n ts_in\n )\n\ndef transform(inputs, NUMERIC_COLS, STRING_COLS):\n transformed = inputs.copy()\n print(\"BEFORE TRANSFORMATION\")\n print(\"INPUTS:\", inputs.keys())\n print(inputs['pickup_longitude'].shape)\n feature_columns = {\n colname: tf.feature_column.numeric_column(colname)\n for colname in NUMERIC_COLS\n }\n \n # scale the lat, lon values to be in 0, 1\n for lon_col in ['pickup_longitude', 'dropoff_longitude']: # in range -70 to -78\n transformed[lon_col] = tf.keras.layers.Lambda(\n lambda x: (x+78)/8.0, \n name='scale_{}'.format(lon_col)\n )(inputs[lon_col])\n for lat_col in ['pickup_latitude', 'dropoff_latitude']: # in range 37 to 45\n transformed[lat_col] = tf.keras.layers.Lambda(\n lambda x: (x-37)/8.0, \n name='scale_{}'.format(lat_col)\n )(inputs[lat_col])\n\n # add Euclidean distance. Doesn't have to be accurate calculation because NN will calibrate it\n transformed['euclidean'] = tf.keras.layers.Lambda(euclidean, name='euclidean')([\n inputs['pickup_longitude'],\n inputs['pickup_latitude'],\n inputs['dropoff_longitude'],\n inputs['dropoff_latitude']\n ])\n feature_columns['euclidean'] = tf.feature_column.numeric_column('euclidean')\n \n # hour of day from timestamp of form '2010-02-08 09:17:00+00:00'\n transformed['hourofday'] = tf.keras.layers.Lambda(\n lambda x: tf.strings.to_number(tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),\n name='hourofday'\n )(inputs['pickup_datetime'])\n feature_columns['hourofday'] = tf.feature_column.indicator_column(\n tf.feature_column.categorical_column_with_identity('hourofday', num_buckets=24))\n\n\n # day of week is hard because there is no TensorFlow function for date handling\n transformed['dayofweek'] = tf.keras.layers.Lambda(\n lambda x: dayofweek(x),\n name='dayofweek_pyfun'\n )(inputs['pickup_datetime'])\n transformed['dayofweek'] = tf.keras.layers.Reshape((), name='dayofweek')(transformed['dayofweek'])\n feature_columns['dayofweek'] = tf.feature_column.indicator_column(\n tf.feature_column.categorical_column_with_vocabulary_list(\n 'dayofweek', vocabulary_list = DAYS))\n \n # featurecross lat, lon into nxn buckets, then embed\n # b/135479527\n #nbuckets = NBUCKETS\n #latbuckets = np.linspace(0, 1, nbuckets).tolist()\n #lonbuckets = np.linspace(0, 1, nbuckets).tolist()\n #b_plat = tf.feature_column.bucketized_column(feature_columns['pickup_latitude'], latbuckets)\n #b_dlat = tf.feature_column.bucketized_column(feature_columns['dropoff_latitude'], latbuckets)\n #b_plon = tf.feature_column.bucketized_column(feature_columns['pickup_longitude'], lonbuckets)\n #b_dlon = tf.feature_column.bucketized_column(feature_columns['dropoff_longitude'], lonbuckets)\n #ploc = tf.feature_column.crossed_column([b_plat, b_plon], nbuckets * nbuckets)\n #dloc = tf.feature_column.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)\n #pd_pair = tf.feature_column.crossed_column([ploc, dloc], nbuckets ** 4 )\n #feature_columns['pickup_and_dropoff'] = 
tf.feature_column.embedding_column(pd_pair, 100)\n\n print(\"AFTER TRANSFORMATION\")\n print(\"TRANSFORMED:\", transformed.keys())\n print(\"FEATURES\", feature_columns.keys()) \n return transformed, feature_columns\n\ndef rmse(y_true, y_pred):\n return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true))) \n\ndef build_dnn_model():\n # input layer is all float except for pickup_datetime which is a string\n STRING_COLS = ['pickup_datetime']\n NUMERIC_COLS = set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)\n print(STRING_COLS)\n print(NUMERIC_COLS)\n inputs = {\n colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')\n for colname in NUMERIC_COLS\n }\n inputs.update({\n colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')\n for colname in STRING_COLS\n })\n \n # transforms\n transformed, feature_columns = transform(inputs, NUMERIC_COLS, STRING_COLS)\n dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(transformed)\n\n # two hidden layers of [32, 8] just in like the BQML DNN\n h1 = tf.keras.layers.Dense(32, activation='relu', name='h1')(dnn_inputs)\n h2 = tf.keras.layers.Dense(8, activation='relu', name='h2')(h1)\n\n # final output would normally have a linear activation because this is regression\n # However, we know something about the taxi problem (fares are +ve and tend to be below $60).\n # Use that here. (You can verify by running this query):\n # SELECT APPROX_QUANTILES(fare_amount, 100) FROM serverlessml.cleaned_training_data\n # b/136476088\n #fare_thresh = lambda x: 60 * tf.keras.activations.relu(x)\n #output = tf.keras.layers.Dense(1, activation=fare_thresh, name='fare')(h2)\n output = tf.keras.layers.Dense(1, name='fare')(h2)\n \n model = tf.keras.models.Model(inputs, output)\n model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])\n return model\n\nmodel = build_dnn_model()\nprint(model.summary())",
"WARNING: Logging before flag parsing goes to stderr.\nW0701 17:34:11.270152 140061452654336 deprecation.py:323] From /home/jupyter/.local/lib/python3.5/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4281: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.\nInstructions for updating:\nThe old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.\nW0701 17:34:11.271269 140061452654336 deprecation.py:323] From /home/jupyter/.local/lib/python3.5/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4336: VocabularyListCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.\nInstructions for updating:\nThe old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.\nW0701 17:34:11.334257 140061452654336 deprecation.py:323] From /home/jupyter/.local/lib/python3.5/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4336: IdentityCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.\nInstructions for updating:\nThe old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.\n"
],
[
"tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')",
"_____no_output_____"
]
],
[
[
"## Train model\n\nTo train the model, call model.fit()",
"_____no_output_____"
]
],
[
[
"trainds = load_dataset('taxi-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)\nevalds = load_dataset('taxi-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//10000) # evaluate on 1/10 final evaluation set\n\nsteps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)\n\nshutil.rmtree('{}/checkpoints/'.format(OUTDIR), ignore_errors=True)\ncheckpoint_path = '{}/checkpoints/taxi'.format(OUTDIR)\ncp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path, \n save_weights_only=True,\n verbose=1)\n\nhistory = model.fit(trainds, \n validation_data=evalds,\n epochs=NUM_EVALS, \n steps_per_epoch=steps_per_epoch,\n callbacks=[cp_callback])",
"W0701 17:34:15.675758 140061452654336 deprecation.py:323] From /home/jupyter/.local/lib/python3.5/site-packages/tensorflow_core/python/data/experimental/ops/readers.py:498: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_determinstic`.\nW0701 17:34:15.693717 140061452654336 deprecation.py:323] From /home/jupyter/.local/lib/python3.5/site-packages/tensorflow_core/python/data/experimental/ops/readers.py:211: shuffle_and_repeat (from tensorflow.python.data.experimental.ops.shuffle_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.shuffle(buffer_size, seed)` followed by `tf.data.Dataset.repeat(count)`. Static tf.data optimizations will take care of using the fused implementation.\n"
],
[
"# plot\nimport matplotlib.pyplot as plt\nnrows = 1\nncols = 2\nfig = plt.figure(figsize=(10, 5))\n\nfor idx, key in enumerate(['loss', 'rmse']):\n ax = fig.add_subplot(nrows, ncols, idx+1)\n plt.plot(history.history[key])\n plt.plot(history.history['val_{}'.format(key)])\n plt.title('model {}'.format(key))\n plt.ylabel(key)\n plt.xlabel('epoch')\n plt.legend(['train', 'validation'], loc='upper left');",
"_____no_output_____"
]
],
[
[
"## Evaluate over full validation dataset\n\nLet's evaluate over the full validation dataset (provided the validation dataset is large enough).",
"_____no_output_____"
]
],
[
[
"evalds = load_dataset('taxi-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)\nmodel.evaluate(evalds)",
"10/10 [==============================] - 2s 175ms/step - loss: 58.7310 - rmse: 7.6581 - mse: 58.7310\n"
]
],
[
[
"Yippee! We are now at under 4 dollars RMSE!",
"_____no_output_____"
],
[
"## Predict with model\n\nThis is how to predict with this model:",
"_____no_output_____"
]
],
[
[
"model.predict({\n 'pickup_longitude': tf.convert_to_tensor([-73.982683]),\n 'pickup_latitude': tf.convert_to_tensor([40.742104]),\n 'dropoff_longitude': tf.convert_to_tensor([-73.983766]),\n 'dropoff_latitude': tf.convert_to_tensor([40.755174]),\n 'passenger_count': tf.convert_to_tensor([3.0]),\n 'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00+00:00'], dtype=tf.string),\n})",
"_____no_output_____"
]
],
[
[
"However, this is not realistic, because we can't expect client code to have a model object in memory. We'll have to export our model to a file, and expect client code to instantiate the model from that exported file.",
"_____no_output_____"
],
[
"## Export model\n\nLet's export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to \"serve\" the model, from a web application, from JavaScript, from mobile applications, etc.",
"_____no_output_____"
]
],
[
[
"export_dir = os.path.join(EXPORT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))\ntf.keras.experimental.export_saved_model(model, export_dir)\nprint(export_dir)\n\n# Recreate the exact same model\nnew_model = tf.keras.experimental.load_from_saved_model(export_dir)\n\n# try predicting with this model\nnew_model.predict({\n 'pickup_longitude': tf.convert_to_tensor([-73.982683]),\n 'pickup_latitude': tf.convert_to_tensor([40.742104]),\n 'dropoff_longitude': tf.convert_to_tensor([-73.983766]),\n 'dropoff_latitude': tf.convert_to_tensor([40.755174]),\n 'passenger_count': tf.convert_to_tensor([3.0]), \n 'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00+00:00'], dtype=tf.string),\n})",
"W0701 17:34:54.258321 140061452654336 deprecation.py:323] From <ipython-input-14-59bd946d4a0c>:2: export_saved_model (from tensorflow.python.keras.saving.saved_model_experimental) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `model.save(..., save_format=\"tf\")` or `tf.keras.models.save_model(..., save_format=\"tf\")`.\nW0701 17:34:54.473381 140061452654336 deprecation.py:506] From /home/jupyter/.local/lib/python3.5/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1624: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\nW0701 17:34:55.300076 140061452654336 deprecation.py:323] From /home/jupyter/.local/lib/python3.5/site-packages/tensorflow_core/python/saved_model/signature_def_utils_impl.py:253: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.\nInstructions for updating:\nThis function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.\nW0701 17:34:55.302366 140061452654336 export_utils.py:182] Export includes no default signature!\nW0701 17:34:55.828526 140061452654336 export_utils.py:182] Export includes no default signature!\nW0701 17:34:56.309083 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer\nW0701 17:34:56.310736 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer.iter\nW0701 17:34:56.313190 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer.beta_1\nW0701 17:34:56.315279 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer.beta_2\nW0701 17:34:56.316137 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer.decay\nW0701 17:34:56.319124 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer.learning_rate\nW0701 17:34:56.319861 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-0.kernel\nW0701 17:34:56.321615 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-0.bias\nW0701 17:34:56.322386 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.kernel\nW0701 17:34:56.323078 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.bias\nW0701 17:34:56.326979 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-2.kernel\nW0701 17:34:56.327717 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-2.bias\nW0701 17:34:56.328375 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-0.kernel\nW0701 17:34:56.329207 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-0.bias\nW0701 17:34:56.330018 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.kernel\nW0701 17:34:56.330791 140061452654336 
util.py:144] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.bias\nW0701 17:34:56.331533 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-2.kernel\nW0701 17:34:56.336975 140061452654336 util.py:144] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-2.bias\nW0701 17:34:56.337713 140061452654336 util.py:152] A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/alpha/guide/checkpoints#loading_mechanics for details.\nW0701 17:34:56.342913 140061452654336 deprecation.py:323] From <ipython-input-14-59bd946d4a0c>:6: load_from_saved_model (from tensorflow.python.keras.saving.saved_model_experimental) is deprecated and will be removed in a future version.\nInstructions for updating:\nThe experimental save and load functions have been deprecated. Please switch to `tf.keras.models.load_model`.\n"
]
],
[
[
"In this notebook, we have looked at how to implement a custom Keras model using feature columns.",
"_____no_output_____"
],
[
"Copyright 2019 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
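
The taxifare record above scales pickup/dropoff coordinates into [0, 1] and adds a rough Euclidean-distance feature inside the Keras model. A standalone sketch of those same transforms on one example row is shown below, using the notebook's own scaling constants; it is only a plain pandas/numpy cross-check, not part of the original model code.

```python
# Standalone check of the coordinate transforms used in the notebook above:
# lon scaled as (x + 78) / 8, lat scaled as (x - 37) / 8, plus a crude Euclidean distance.
import numpy as np
import pandas as pd

row = pd.DataFrame({
    "pickup_longitude": [-73.982683], "pickup_latitude": [40.742104],
    "dropoff_longitude": [-73.983766], "dropoff_latitude": [40.755174],
})

scaled = pd.DataFrame({
    "pickup_longitude": (row["pickup_longitude"] + 78) / 8.0,
    "dropoff_longitude": (row["dropoff_longitude"] + 78) / 8.0,
    "pickup_latitude": (row["pickup_latitude"] - 37) / 8.0,
    "dropoff_latitude": (row["dropoff_latitude"] - 37) / 8.0,
})
scaled["euclidean"] = np.sqrt(
    (row["dropoff_longitude"] - row["pickup_longitude"]) ** 2
    + (row["dropoff_latitude"] - row["pickup_latitude"]) ** 2
)
print(scaled.round(4))
```
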
cb580fe6bf7f836db51f43f71ee1665c601fa17d | 51,397 | ipynb | Jupyter Notebook | _oldnotebooks/Basic_Sequence_Analysis.ipynb | eneskemalergin/OldBlog | 3fc3c516b40c1cba7dec18eb461e4dd90f8cb5bb | [
"MIT"
] | null | null | null | _oldnotebooks/Basic_Sequence_Analysis.ipynb | eneskemalergin/OldBlog | 3fc3c516b40c1cba7dec18eb461e4dd90f8cb5bb | [
"MIT"
] | null | null | null | _oldnotebooks/Basic_Sequence_Analysis.ipynb | eneskemalergin/OldBlog | 3fc3c516b40c1cba7dec18eb461e4dd90f8cb5bb | [
"MIT"
] | null | null | null | 97.15879 | 19,912 | 0.829251 | [
[
[
"# Performing Basic Sequence Analysis\n\nNow I am continuing to my bioinformatics cookbook tutorial series. Today's topic is to perform basic sequence analysis which is the basics of Next Generation Sequencing. \n\nWe will do some basic sequence analysis on DNA sequences. FASTA files are our main target on this, also Biopython as a main library of Python.\n\nLet's first download a FASTA sequence",
"_____no_output_____"
]
],
[
[
"from Bio import Entrez, SeqIO\n# Using my email\nEntrez.email = \"[email protected]\"\n# Get the FASTA file \nhdl = Entrez.efetch(db='nucleotide', id=['NM_002299'],rettype='fasta') # Lactase gene\n# Read it and store it in seq\nseq = SeqIO.read(hdl, 'fasta')\nprint \"First 10 and last 10: \" + seq.seq[:10] + \"...\" + seq.seq[-10:]",
"First 10 and last 10: GTTCCTAGAA...CTGTCCTTTC\n"
]
],
[
[
"- Let's save the Biopython object in FASTA file;",
"_____no_output_____"
]
],
[
[
"from Bio import SeqIO\n# Open a new fasta file and make it ready to write on\nw_hdl = open('example.fasta', 'w')\n# specify the part to write\nw_seq = seq[11:5795]\n# Write it\nSeqIO.write([w_seq], w_hdl, 'fasta')\n# And of course close it\nw_hdl.close()",
"_____no_output_____"
]
],
[
[
"> If you want to write many sequences (easily millions with NGS), do not use a list, as shown in the preceding code because this will allocate massive amounts of memory.Either use an iterator or use the ```SeqIO.write``` function several times with a subset of sequence on each write.\n\n- We need to read the sequence of course to be able to use it",
"_____no_output_____"
]
],
[
[
"# Parse the fasta file and store it in recs\nrecs = SeqIO.parse('example.fasta', 'fasta')\n# Iterate over each records\nfor rec in recs:\n # Get the sequences of each rec\n seq = rec.seq\n # Show the desription\n print(rec.description)\n # Show the first 10 letter in sequence\n print(seq[:10])\n # \n print(seq.alphabet)",
"gi|32481205|ref|NM_002299.2| Homo sapiens lactase (LCT), mRNA\nATGGAGCTGT\nSingleLetterAlphabet()\n"
]
],
[
[
"In our example code we have only 1 sequence in 1 FASTA file so we did not have to iterate through each record. Since we won't know each time how many records we will have in FASTA the code above is suitable for most cases.\n\n> The first line of FASTA file is description of the gene, in this case : ```gi|32481205|ref|NM_002299.2| Homo sapiens lactase (LCT), mRNA```\n\n> The second line is the first 10 lettern in sequence\n\n> The last line is shows how the sequence represented\n\n- Now let's change the alphabet of the sequence we got:\n\n> We create a new sequence with a more informative alphabet.",
"_____no_output_____"
]
],
[
[
"from Bio import Seq\nfrom Bio.Alphabet import IUPAC\nseq = Seq.Seq(str(seq), IUPAC.unambiguous_dna)",
"_____no_output_____"
]
],
[
[
"- Now have an unambiguous DNA, we can transcribe it as follows:",
"_____no_output_____"
]
],
[
[
"rna = Seq.Seq(str(seq), IUPAC.unambiguous_dna)\nrna = seq.transcribe() # Changing DNA into RNA\nprint \"some of the rna variable: \"+rna[:10]+\"...\"+rna[-10:]",
"some of the rna variable: AUGGAGCUGU...UUCAUUCUGA\n"
]
],
[
[
"> Note that the ```Seq``` constructor takes a string, not a sequence. You will see that the alphabet of the ```rna``` variable is now ```IUPACUnambigousRNA```.\n\n- Finally let's translate it into Protein:",
"_____no_output_____"
]
],
[
[
"prot = seq.translate() # Changing RNA into corresponding Protein\nprint \"some of the resulting protein sequence: \"+prot[:10]+\"...\"+prot[-10:]",
"some of the resulting protein sequence: MELSWHVVFI...QELSPVSSF*\n"
]
],
[
[
"Now, we have a protein alphabet with the annotation that there is a stop codon (so, our protein is complete).\n\n---\n\nThere are other files to store and represent sequences and we talked about some of them in the [first blog post of the series](http://eneskemalergin.github.io/2015/10/11/Getting_Started_NGS/). Now I will show you how to work with modern file formats such as FASTQ format.\n\nFASTQ files are the standard format output by modern sequencers. The purpose of the following content is to make you comfortable with quality scores and how to work with them. To be able to explain the concept we will use real big data from \"1000 Genomes Project\"\n\n> Next-generation datasets are generally very large like 1000 Genomes Project. You will need to download some stuff so, get ready to wait :)\n\nLet's Start by downloading the dataset: (BTW the following snippet is for IPython NB so if you are following this from my blog go ahead and [click here](ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/sequence_read/SRR003265.filt.fastq.gz))\n",
"_____no_output_____"
]
],
[
[
"!wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/sequence_read/SRR003265.filt.fastq.gz",
"--2015-10-26 08:21:31-- ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/sequence_read/SRR003265.filt.fastq.gz\n => 'SRR003265.filt.fastq.gz.1'\nResolving ftp.1000genomes.ebi.ac.uk... 193.62.192.8\nConnecting to ftp.1000genomes.ebi.ac.uk|193.62.192.8|:21... connected.\nLogging in as anonymous ... Logged in!\n==> SYST ... done. ==> PWD ... done.\n==> TYPE I ... done. ==> CWD (1) /vol1/ftp/phase3/data/NA18489/sequence_read ... done.\n==> SIZE SRR003265.filt.fastq.gz ... 28919712\n==> PASV ... done. ==> RETR SRR003265.filt.fastq.gz ... done.\nLength: 28919712 (28M) (unauthoritative)\n\nSRR003265.filt.fast 100%[=====================>] 27.58M 1.43MB/s in 15s \n\n2015-10-26 08:21:49 (1.88 MB/s) - 'SRR003265.filt.fastq.gz.1' saved [28919712]\n\n"
]
],
[
[
"Now we have file \"SRR003265.filt.fastq.gz\" which has 3 extensions, 1 is fastq so we are fine. The last one ```gz``` is the thing we will solve with Pyhton Library while we are opening it.\n\n- First we need to open the file:",
"_____no_output_____"
]
],
[
[
"import gzip # This is the library we need to unzip .gz\nfrom Bio import SeqIO # The usual SeqIO\n# Unzip and read the fastq file at the end store it in recs\nrecs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz'),'fastq')\nrec = next(recs)\n# Print the id, description and sequence of the record\nprint(rec.id, rec.description, rec.seq)\n# Print the letter_annotations\n# Biopython will convert all the Phred encoding letters to logarithmic scores\nprint(rec.letter_annotations)",
"('SRR003265.31', 'SRR003265.31 3042NAAXX:3:1:1252:1819 length=51', Seq('GGGAAAAGAAAAACAAACAAACAAAAACAAAACACAGAAACAAAAAAACCA', SingleLetterAlphabet()))\n{'phred_quality': [40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 30, 23, 40, 32, 35, 29, 40, 16, 40, 40, 32, 35, 31, 40, 40, 39, 22, 40, 24, 20, 28, 31, 12, 31, 10, 22, 28, 13, 26, 20, 23, 23]}\n"
]
],
[
[
"> You should usually store your FASTQ files in a compressed format, for space saving and processing time saving's sake.\n\n> Don't use list(recs), if you don't want to sacrife a lot of memory, since FASTQ files are usualy big ones.\n\n- Then, let's take a look at the distribution of nucleotide reads:",
"_____no_output_____"
]
],
[
[
"from collections import defaultdict\n# Unzip and read the fastq file\nrecs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz'),'fastq')\n# Make integer dictionary \ncnt = defaultdict(int)\n# Iterate over records\nfor rec in recs:\n # In each letter of the sequence\n for letter in rec.seq:\n # Count the letters and store the number of count in dictionary cnt\n cnt[letter] += 1\n# Find the total of cnt counts\ntot = sum(cnt.values())\n# Iterate over the dictionary cnt\nfor letter, cnt_value in cnt.items():\n print('%s: %.2f %d' % (letter, 100. * cnt_value / tot, cnt_value))\n # Prints the following\n # For each Letter inside\n # Print the percentage of apperance in sequences\n # and the total number of letter \n # Do this for each letter (even for NONE(N))",
"A: 28.60 7411965\nC: 21.00 5444053\nT: 29.58 7666885\nG: 20.68 5359334\nN: 0.14 37289\n"
]
],
[
[
"> Note that there is a residual number for N calls. These are calls in which a sequencer reports an unknown base.\n\n- Now, let's plot the distribution of Ns according to its read position:",
"_____no_output_____"
]
],
[
[
"%matplotlib inline \n# Plot it in IPython Directly\n# Calling libraries\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n# Again unzip, read the fastq file \nrecs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz'), 'fastq')\n# Make a dictionary\nn_cnt = defaultdict(int)\n# The same code as before until here \n# iterate through the file and get the position of any references to N.\nfor rec in recs:\n for i, letter in enumerate(rec.seq):\n pos = i + 1\n if letter == 'N':\n n_cnt[pos] += 1 \nseq_len = max(n_cnt.keys())\npositions = range(1, seq_len + 1)\nfig, ax = plt.subplots()\nax.plot(positions, [n_cnt[x] for x in positions])\nax.set_xlim(1, seq_len)",
"_____no_output_____"
]
],
[
[
"> Until position 25, there are no errors. This is not what you will get from a typical sequencer output, because Our example file is already filtered and the 1000 genomes filtering rules enforce that no N calls can occur before position 25.\n\n> the quantity of uncalled bases is positiondependent.\n\n- So, what about the quality of reads?\n - Let's study the distribution of Phred scores and plot the distribution of qualities according to thei read position:",
"_____no_output_____"
]
],
[
[
"# Reopen and read\nrecs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz'),'fastq')\n# default dictionary\nqual_pos = defaultdict(list)\n\nfor rec in recs:\n for i, qual in enumerate(rec.letter_annotations['phred_quality']):\n if i < 25 or qual == 40:\n continue\n pos = i + 1\n qual_pos[pos].append(qual)\nvps = []\nposes = qual_pos.keys()\nposes.sort()\nfor pos in poses:\n vps.append(qual_pos[pos])\nfig, ax = plt.subplots()\nax.boxplot(vps)\nax.set_xticklabels([str(x) for x in range(26, max(qual_pos.keys()) + 1)])",
"_____no_output_____"
]
],
[
[
"> We will ignore both positions sequenced 25 base pairs from start (again, remove this rule if you have unfiltered sequencer data) and the maximum quality score for this file (40). However, in your case, you can consider starting your plotting analysis also with the maximum. You may want to check the maximum possible value for your sequencer hardware. Generally, as most calls can be performed with maximum quality, you may want to remove them if you are trying to understand where quality problems lie.\n\n---",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
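
The Biopython record above warns against materialising all FASTQ records in a list before writing. A minimal sketch of the streaming alternative follows; it assumes Python 3 (gzip handles opened in text mode for SeqIO), reuses the notebook's file name, and the length threshold is an arbitrary illustration.

```python
# Streaming filter of a gzipped FASTQ file: records are read, tested and written
# one at a time, so memory use stays flat regardless of file size.
import gzip
from Bio import SeqIO

def long_reads(path, min_len=50):
    # Generator: yields records lazily instead of building a list.
    with gzip.open(path, "rt") as handle:  # text mode for Python 3
        for rec in SeqIO.parse(handle, "fastq"):
            if len(rec.seq) >= min_len:
                yield rec

# SeqIO.write accepts an iterator, so nothing is held in memory all at once.
count = SeqIO.write(long_reads("SRR003265.filt.fastq.gz"), "filtered.fastq", "fastq")
print(count, "records written")
```
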
cb581ebe021121d81899c5b4ddcd43008e6dd970 | 448,691 | ipynb | Jupyter Notebook | plots of trigonometric functions.ipynb | kchari29/HKC | 42acb71a26d2eb515afe0579a8994d2b06ae83f7 | [
"MIT"
] | null | null | null | plots of trigonometric functions.ipynb | kchari29/HKC | 42acb71a26d2eb515afe0579a8994d2b06ae83f7 | [
"MIT"
] | null | null | null | plots of trigonometric functions.ipynb | kchari29/HKC | 42acb71a26d2eb515afe0579a8994d2b06ae83f7 | [
"MIT"
] | null | null | null | 424.89678 | 65,404 | 0.939491 | [
[
[
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\npi = np.pi\nx = np.linspace(-4*pi, 4*pi, 1000)\nplt.plot(x, np.sin(x)/x)\nplt.show()\n\n",
"_____no_output_____"
],
[
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\npi = np.pi\nx = np.linspace(-4*pi, 4*pi, 1000)\nplt.plot(x, np.cos(x)/x)\nplt.show()",
"_____no_output_____"
],
[
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\npi = np.pi\nx = np.linspace(-4*pi, 4*pi, 1000)\nplt.plot(x, np.tan(x)/x)\nplt.show()",
"_____no_output_____"
],
[
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\npi = np.pi\nx = np.linspace(-4*pi, 4*pi, 1000)\nplt.plot(x, np.sin(x))\nplt.show()",
"_____no_output_____"
],
[
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\npi = np.pi\nx = np.linspace(-4*pi, 4*pi, 1000)\nplt.plot(x, np.sinh(x)/x)\nplt.show()",
"_____no_output_____"
],
[
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\npi = np.pi\nx = np.linspace(-100*pi, 100*pi, 1000)\nplt.plot(x, np.tanh(x))\nplt.show()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport numpy as np\nx = np.linspace(0,2,100)\n# Note that even in the OO-style, we use `.pyplot.figure` to create the figure.\nfig, ax = plt.subplots() # Create a figure and an axes.\nax.plot(x, x, label='linear') # Plot some data on the axes.\nax.plot(x, x**2, label='quadratic') # Plot more data on the axes...\nax.plot(x, x**3, label='cubic') # ... and some more.\nax.set_xlabel('x label') # Add an x-label to the axes.\nax.set_ylabel('y label') # Add a y-label to the axes.\nax.set_title(\"Simple Plot\") # Add a title to the axes.\nax.legend() # Add a legend.\n",
"_____no_output_____"
],
[
"x = np.linspace(0, 2, 100)\nplt.plot(x, x, label='linear') # Plot some data on the (implicit) axes.\nplt.plot(x, x**2, label='quadratic') # etc.\nplt.plot(x, x**3, label='cubic')\nplt.xlabel('x label')\nplt.ylabel('y label')\nplt.title(\"Simple Plot\")\nplt.legend()",
"_____no_output_____"
],
[
"def my_plotter(ax, data1, data2, param_dict):\n \"\"\"\n A helper function to make a graph\n\n Parameters\n ----------\n ax : Axes\n The axes to draw to\n\n data1 : array\n The x data\n\n data2 : array\n The y data\n\n param_dict : dict\n Dictionary of kwargs to pass to ax.plot\n\n Returns\n -------\n out : list\n list of artists added\n \"\"\"\n out = ax.plot(data1, data2, **param_dict)\n return out\n",
"_____no_output_____"
],
[
"data1, data2, data3, data4 = np.random.randn(4, 100)\nfig, ax = plt.subplots(1, 1)\nmy_plotter(ax, data1, data2, {'marker': 'x'})",
"_____no_output_____"
]
],
[
[
"# if you wanted to have 2 sub-plots:",
"_____no_output_____"
]
],
[
[
"fig, (ax1, ax2) = plt.subplots(1, 2)\nmy_plotter(ax1, data1, data2, {'marker': 'x'})\nmy_plotter(ax2, data3, data4, {'marker': 'o'})",
"_____no_output_____"
],
[
"plt.ioff()\nfor i in range(3):\n plt.plot(np.random.rand(10))\n plt.show()",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\n\n# Setup, and create the data to plot\ny = np.random.rand(100000)\ny[50000:] *= 2\ny[np.logspace(1, np.log10(50000), 400).astype(int)] = -1\nmpl.rcParams['path.simplify'] = True\n\nmpl.rcParams['path.simplify_threshold'] = 0.0\nplt.plot(y)\nplt.show()\n\nmpl.rcParams['path.simplify_threshold'] = 1.0\nplt.plot(y)\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\n\nlabels = ['G1', 'G2', 'G3', 'G4', 'G5']\nmen_means = [20, 35, 30, 35, 27]\nwomen_means = [25, 32, 34, 20, 25]\nmen_std = [2, 3, 4, 1, 2]\nwomen_std = [3, 5, 2, 3, 3]\nwidth = 0.35 # the width of the bars: can also be len(x) sequence\n\nfig, ax = plt.subplots()\n\nax.bar(labels, men_means, width, yerr=men_std, label='Men')\nax.bar(labels, women_means, width, yerr=women_std, bottom=men_means,\n label='Women')\n\nax.set_ylabel('Scores')\nax.set_title('Scores by group and gender')\nax.legend()\n\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\n# Fixing random state for reproducibility\nnp.random.seed(19680801)\n\ndt = 0.01\nt = np.arange(0, 30, dt)\nnse1 = np.random.randn(len(t)) # white noise 1\nnse2 = np.random.randn(len(t)) # white noise 2\n\n# Two signals with a coherent part at 10Hz and a random part\ns1 = np.sin(2 * np.pi * 10 * t) + nse1\ns2 = np.sin(2 * np.pi * 10 * t) + nse2\n\nfig, axs = plt.subplots(2, 1)\naxs[0].plot(t, s1, t, s2)\naxs[0].set_xlim(0, 2)\naxs[0].set_xlabel('time')\naxs[0].set_ylabel('s1 and s2')\naxs[0].grid(True)\n\ncxy, f = axs[1].cohere(s1, s2, 256, 1. / dt)\naxs[1].set_ylabel('coherence')\n\nfig.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\n# example data\nx = np.arange(0.1, 4, 0.1)\ny1 = np.exp(-1.0 * x)\ny2 = np.exp(-0.5 * x)\n\n# example variable error bar values\ny1err = 0.1 + 0.1 * np.sqrt(x)\ny2err = 0.1 + 0.1 * np.sqrt(x/2)\n\n\n# Now switch to a more OO interface to exercise more features.\nfig, (ax_l, ax_c, ax_r) = plt.subplots(nrows=1, ncols=3,\n sharex=True, figsize=(12, 6))\n\nax_l.set_title('all errorbars')\nax_l.errorbar(x, y1, yerr=y1err)\nax_l.errorbar(x, y2, yerr=y2err)\n\nax_c.set_title('only every 6th errorbar')\nax_c.errorbar(x, y1, yerr=y1err, errorevery=6)\nax_c.errorbar(x, y2, yerr=y2err, errorevery=6)\n\nax_r.set_title('second series shifted by 3')\nax_r.errorbar(x, y1, yerr=y1err, errorevery=(0, 6))\nax_r.errorbar(x, y2, yerr=y2err, errorevery=(3, 6))\n\nfig.suptitle('Errorbar subsampling for better appearance')\nplt.show()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline",
"_____no_output_____"
],
[
"x=[31.13745\n513.76786\n98.08295\n171.25595\n26.46683\n593.16834\n4.67062\n298.91948\n146.346\n330.05693\n130.77727\n345.62565\n322.27257\n174.3697\n244.42895\n286.4645\n1080.46937\n2947.15926\n213.2915\n1184.77982\n418.79865\n702.14941\n110.53793\n309.81759\n1345.13766\n1068.01439\n501.31288\n1155.19924\n2028.6046\n976.15893\n2568.83929\n1032.20633\n2903.56683\n734.84372\n2372.67338\n2048.84394\n6443.89443\n854.72289\n1049.33193\n846.93853\n2671.59286\n902.98593\n4899.47712\n753.52619\n1113.16369\n4377.9249\n4164.63339\n5126.78047\n6081.14319\n3149.55265\n3018.77538\n1890.04297\n6053.11949\n8534.77393\n6325.57214\n1684.53582\n9439.31673\n4446.42728\n7004.36846\n6963.88978\n10960.38096\n6457.90628\n3879.72576\n5199.95347\n7164.72631\n74.72987\n8017.89232\n10580.50412\n3529.42949\n6383.17641\n7806.15769\n7029.27842\n6896.94427\n6014.19768\n5998.62896\n10471.52306\n9629.25515\n8413.33789\n10563.37853\n6548.20488\n10673.91646\n7619.33302\n11592.47112\n8786.98724\n17770.14039\n9282.07263\n7021.49405\n5676.35639\n6499.94184\n4461.996\n4032.29925\n19795.63124\n11769.95456\n12068.87404\n12813.05899\n9907.93529\n7989.86862\n7343.76662\n11919.4143\n12336.65607\n7142.93009\n12241.68686\n6426.76884\n8555.01327\n10272.24341\n5517.55542\n3362.84416\n9700.87128\n10733.07761\n11061.57766\n7985.19801\n10199.07041\n17913.37264\n10711.2814\n5206.18096\n11785.52328\n9778.71489\n5884.97728\n8010.10796\n5729.29005\n5793.12181\n3571.46505\n10211.52539\n4497.80406\n4787.38231\n6062.46072\n6383.17641\n5064.50558\n4584.98891\n12881.56138\n4499.36094\n3420.44843\n2287.0454\n2051.95769\n2534.5881\n1500.82489\n2861.53128\n147.90287\n3205.60006\n3649.30866\n3130.87019\n411.01429\n3017.21851\n794.00487\n2184.29183\n3825.23523\n2799.25639\n71.61613\n1499.26802\n256.88393\n586.94086\n267.78203\n130.77727\n303.5901\n236.64459\n261.55455\n121.43604\n523.10909\n278.68014\n249.09957\n194.60904\n29.58057\n0\n1.55687\n18.68247\n1.55687\n17.1256\n294.24886\n359.6375\n0\n194.60904\n26.46683\n48.26304\n12.45498\n66.94551\n28.0237\n38.92181]\ny=[20.02045\n22.53706\n19.76639\n29.74577\n30.54583\n18.08904\n17.38001\n18.92677\n28.29161\n19.95979\n17.31224\n21.83106\n34.43798\n38.77548\n19.32044\n18.15695\n20.99185\n35.56716\n23.35302\n36.80261\n29.67645\n43.26466\n17.65234\n25.95369\n26.90249\n33.25161\n19.6972\n47.30997\n36.17422\n15.03087\n22.46046\n28.87446\n38.74643\n24.32572\n55.13447\n39.74956\n48.0014\n28.78154\n34.04608\n35.0192\n30.50575\n21.83062\n54.2865\n30.26874\n27.08733\n42.23335\n33.17988\n45.17475\n37.19604\n41.56082\n30.83675\n47.73186\n54.5239\n45.49809\n41.36297\n43.31118\n54.99624\n49.43302\n58.01368\n66.8944\n77.29361\n49.75644\n45.22979\n55.39367\n56.6017\n67.05372\n66.02036\n66.14054\n52.69512\n57.86693\n64.66502\n46.73512\n76.61953\n92.96712\n94.47939\n131.95119\n95.81957\n92.1434\n95.52797\n98.40329\n94.16242\n124.21348\n149.43851\n79.63067\n95.33529\n104.60758\n78.20073\n159.51689\n133.12523\n146.49849\n184.95519\n107.22993\n164.68631\n120.5651\n155.16447\n157.83213\n156.45957\n148.63799\n144.4465\n148.73119\n148.17544\n144.70658\n154.65209\n128.86943\n139.48889\n182.32727\n142.93312\n176.04107\n139.0309\n128.9639\n185.60657\n170.79296\n153.18813\n165.2832\n179.67027\n184.32396\n165.56155\n121.94358\n92.14223\n81.13095\n104.22317\n97.15773\n139.5961\n82.14449\n115.60005\n147.97317\n109.50184\n89.34629\n105.83884\n130.6108\n163.60871\n132.88827\n63.95976\n30.62878\n23.92871\n37.9417\n41.07415\n48.80755\n48.29521\n39.51854\n78.94565\n25.56546\n46.15475\n54.18953\n74.32862\n89.13755\n51.08192\n26.00067\n22.732
47\n19.78375\n24.11613\n27.02586\n23.82791\n23.48308\n23.45353\n42.51956\n19.57252\n22.485\n26.35474\n25.00031\n47.96181\n23.87384\n22.2708\n16.50245\n19.27467\n19.63548\n21.59698\n26.41076\n21.64263\n11.94924\n28.11239\n17.10387\n26.49114\n18.2528\n16.00866\n17.5329\n21.08909]",
"_____no_output_____"
],
[
"plt.scatter(x,y)\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
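
The plotting record above draws tan(x) and tan(x)/x directly, which lets matplotlib connect the branches across the asymptotes into misleading vertical lines. A small sketch of one common fix, masking large values with NaN (the threshold of 10 is an arbitrary choice), follows.

```python
# Plot tan(x) with the asymptote jumps masked out, so only the real branches are drawn.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4 * np.pi, 4 * np.pi, 1000)
y = np.tan(x)
y[np.abs(y) > 10] = np.nan  # break the line wherever tan(x) blows up
plt.plot(x, y)
plt.show()
```
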
cb5825785a603a09529c176dc6d3bb4cd60e32dc | 555,325 | ipynb | Jupyter Notebook | DSM-Modelling.ipynb | windnode/SinkDSM_example | 575ad6b2e541c02f9c9b5c70eb79f843408141b4 | [
"MIT"
] | null | null | null | DSM-Modelling.ipynb | windnode/SinkDSM_example | 575ad6b2e541c02f9c9b5c70eb79f843408141b4 | [
"MIT"
] | null | null | null | DSM-Modelling.ipynb | windnode/SinkDSM_example | 575ad6b2e541c02f9c9b5c70eb79f843408141b4 | [
"MIT"
] | null | null | null | 633.932648 | 64,160 | 0.940278 | [
[
[
"# DSM - Modelling",
"_____no_output_____"
],
[
"- ploltting.py is imported to facilitate in visualization",
"_____no_output_____"
],
[
"\n$ (0) \\quad \\dot{E}_{t} \\quad = \\quad demand_{t} \\quad + \\quad DSM_{t}^{up} \\quad - \\quad\n \\sum_{tt=t-L}^{t+L} DSM_{t,tt}^{do} \\qquad \\forall t $\n\n### Formulation after Zerrahn & Schill\n \n$ (1) \\quad DSM_{t}^{up} \\quad = \\quad \\sum_{tt=t-L}^{t+L} DSM_{t,tt}^{do}\n \\qquad \\forall t $\n \n$ (2) \\quad DSM_{t}^{up} \\quad \\leq \\quad E_{t}^{up} \\qquad \\forall t $\n\n$ (3) \\quad \\sum_{t=tt-L}^{tt+L} DSM_{t,tt}^{do} \\quad \\leq \\quad E_{t}^{do}\n \\qquad \\forall tt $\n\n$ (4) \\quad DSM_{tt}^{up} \\quad + \\quad \\sum_{t=tt-L}^{tt+L} DSM_{t,tt}^{do} \\quad\n \\leq \\quad max \\{ E_{t}^{up}, E_{t}^{do} \\} \\qquad \\forall tt $\n",
"_____no_output_____"
],
[
"**Table: Symbols and attribute names of variables V and parameters P**\n\n|symbol | attribute | type|explanation|\n|-------------------|-------------------|----|--------------------------------------|\n|$DSM_{t}^{up} $ | `dsm_do[g,t,tt] `| $V$| DSM up shift (additional load) |\n|$DSM_{t,tt}^{do}$ | `dsm_up[g,t]` | $V$| DSM down shift (less load) |\n|$\\dot{E}_{t} $ |`flow[g,t]` | $V$| Energy flowing in from electrical bus|\n|$L$ |`delay_time` | $P$| Delay time for load shift |\n|$demand_{t} $ | `demand[t]` | $P$| Electrical demand series |\n|$E_{t}^{do}$ |`capacity_down[tt]`| $P$| Capacity DSM down shift |\n|$E_{t}^{up} $ |`capacity_up[tt]` | $P$| Capacity DSM up shift |",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"from oemof import solph, outputlib\nfrom oemof.network import Node\nimport pandas as pd\nimport os\n\n# plot_dsm.py\nimport plotting as plt_dsm\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"## base dataset\nplt_dsm.make_directory('graphics')",
"----------------------------------------------------------\nCreated folder \"graphics\" in current directory.\n----------------------------------------------------------\n"
]
],
[
[
"## Energy Model\nFor the testing, a basic energy system was set up including:\n\n- Coal PP\n- Wind PP\n- PV PP\n- DSM Sink\n- shortage\n- excess\n",
"_____no_output_____"
]
],
[
[
"def create_model(data, datetimeindex, directory, project, method, delay_time, shift_interval):\n \n\n # ----------------- Energy System ----------------------------\n \n # Create Energy System\n es = solph.EnergySystem(timeindex=datetimeindex)\n Node.registry = es\n\n # Create Busses\n b_coal_1 = solph.Bus(label='bus_coal_1')\n b_elec = solph.Bus(label='bus_elec')\n\n # Create Sources\n s_coal_p1 = solph.Source(label='source_coal_p1',\n outputs={\n b_coal_1: solph.Flow(\n nominal_value=10000,\n variable_costs=10)}\n )\n\n s_wind = solph.Source(label='wind',\n outputs={\n b_elec: solph.Flow(\n actual_value=data['wind'][datetimeindex],\n fixed=True,\n nominal_value=1)}\n )\n\n s_pv = solph.Source(label='pv',\n outputs={\n b_elec: solph.Flow(\n actual_value=data['pv'][datetimeindex],\n fixed=True,\n nominal_value=1)}\n )\n\n # Create Transformer\n cfp_1 = solph.Transformer(label='pp_coal_1',\n inputs={b_coal_1: solph.Flow()},\n outputs={\n b_elec: solph.Flow(\n variable_costs=0)},\n conversion_factors={b_elec: 1}\n )\n\n # Create DSM\n demand_dsm = solph.custom.SinkDSM(label='demand_dsm',\n inputs={b_elec: solph.Flow(variable_costs=2)},\n demand=data['demand_el'][datetimeindex],\n capacity_up=data['Cap_up'][datetimeindex],\n capacity_down=data['Cap_do'][datetimeindex],\n method=method,\n delay_time=delay_time,\n shift_interval=shift_interval,\n #recovery_time=1\n )\n\n # Backup excess / shortage\n excess = solph.Sink(label='excess_el',\n inputs={b_elec: solph.Flow(variable_costs=1)}\n )\n\n s_shortage_el = solph.Source(label='shortage_el',\n outputs={\n b_elec: solph.Flow(\n variable_costs=200)}\n )\n\n # -------------------------- Create Model ----------------------\n\n # Create Model\n model = solph.Model(es)\n\n # Solve Model\n model.solve(solver='cbc', solve_kwargs={'tee': False})\n\n # Write LP File\n filename = os.path.join(os.path.dirname('__file__'), directory, project +'.lp')\n model.write(filename, io_options={'symbolic_solver_labels': True})\n\n # Save Results\n es.results['main'] = outputlib.processing.results(model)\n es.dump(dpath=None, filename=None)\n\n return model",
"_____no_output_____"
]
],
[
[
"## Presets",
"_____no_output_____"
]
],
[
[
"def start_model(df_data, timesteps, **kwargs):\n \n method = kwargs.get('method', None)\n delay_time = kwargs.get('delay_time', None)\n shift_interval = kwargs.get('shift_interval', None)\n show = kwargs.get('show', False)\n plot = kwargs.get('plot', False)\n figure_size = kwargs.get('figsize', (10,10))\n \n # ----------------- Input Data & Timesteps ----------------------------\n\n # Provide directory\n project = 'demand_shift_test'\n directory = './'\n\n # Data manipulation\n data = df_data\n\n # Timestamp\n datetimeindex = pd.date_range(start='1/1/2013',\n periods=timesteps,\n freq='H')\n \n # ----------------- Create & Solve Model ----------------------------\n\n # Create model\n model = create_model(data,\n datetimeindex,\n directory,\n project,\n method,\n delay_time,\n shift_interval)\n\n\n # Get Results\n es = solph.EnergySystem()\n es.restore(dpath=None, filename=None)\n\n # Export data\n df_gesamt = plt_dsm.extract_results(model)\n\n # write data in csv\n #df_gesamt.to_csv(directory + project + '_data_dump.csv')\n \n # ----------------- Plot Results ----------------------------\n # Plot\n plt_dsm.plot_dsm(df_gesamt,\n datetimeindex,\n directory,\n timesteps,\n project,\n days=2,\n show=show,\n figsize=figure_size)\n return df_gesamt",
"_____no_output_____"
]
],
[
[
"## base dataset\nFor the limitations of the formulation this test dataset is modified.",
"_____no_output_____"
]
],
[
[
"timesteps = 48\n# test data base\ndemand = [100] * timesteps\npv = [0] * timesteps\ncapup = [100] * timesteps\ncapdo = [100] * timesteps\nwind = [100] * timesteps\n# \nbase = [demand, wind, capup, capdo, pv]\ndf_base = pd.DataFrame(list(zip(*base)))\ndf_base.rename(columns={0:'demand_el',1:'wind', 2:'Cap_up', 3:'Cap_do', 4:'pv'}, inplace=True)\ndf_base['timestamp'] = pd.date_range(start='1/1/2013', periods=timesteps, freq='H')\ndf_base.set_index('timestamp', drop=True, inplace=True)\n",
"_____no_output_____"
]
],
[
[
"# How it should work:",
"_____no_output_____"
]
],
[
[
"# data preperation\nwind = [100] * timesteps\n###### edit specifics\n\n# triple extended\nwind[3:4] = [0]\nwind[38:41] = [200] * 3\n\n# interupting event\nwind[6:7] = [200]\n\ndf_data = df_base.copy()\ndf_data['wind'] = wind\n\n#plot\nfig, ax = plt.subplots(figsize=(10,4))\nax = df_data[['demand_el', 'wind']].plot(ax=ax, drawstyle=\"steps-post\")\nax = df_data.Cap_up.plot(ax=ax, drawstyle=\"steps-post\", secondary_y=True)\nax = (df_data.Cap_do*-1).plot(ax=ax, drawstyle=\"steps-post\", secondary_y=True)\nax.set_yticks(range(-100,150,50))\nax.legend(loc=9, ncol=3)\nax.set_ylabel(\"MW or % \")\nplt.show()",
"_____no_output_____"
],
[
"# start model\n_ = start_model(df_data, timesteps, plot=True, method='delay', delay_time=3)",
"_____no_output_____"
]
],
[
[
"# limitations of the formulation\n\n\nTo preserve the formulation as a linear problem, the simultaneous activation of \"DSM Up & Down\" cannot be completely prevented. A possible solution with SOS-Variables would end up in a non-convex mixed integer programming problem. Thus leading to an increase in computing time.\n\n\n## extended delay\n\nEquation $(4)$ limits the sum of $DSM_{t}^{up} $ & $DSM_{t}^{down} $ to the value of the max capcacity.\n\n$ (4) \\quad DSM_{tt}^{up} \\quad + \\quad \\sum_{t=tt-L}^{tt+L} DSM_{t,tt}^{do} \\quad\n \\leq \\quad max \\{ E_{t}^{up}, E_{t}^{do} \\} \\qquad \\forall tt $\n\n\nIf this capacity isn't fullly used, the remaining potential $ E_{x}-DSM^{x} = \\Delta $ might be used to artificially extend the delay time if Equation $(0)$ is not violated.\n\n$ (0) \\quad demand_{t} \\quad = \\quad \\dot{E}_{t} \\quad - \\quad DSM_{t}^{up} \\quad + \\quad\n \\sum_{tt=t-L}^{t+L} DSM_{t,tt}^{do} \\qquad \\forall t $\n \nThis is the the case if the remaining potential is split in half and added to both variables.\n\n\n$ (0) \\quad demand_{t} \\quad = \\quad \\dot{E}_{t}\\quad - \\quad (DSM_{t}^{up} + \\frac{1}{2} \\cdot \\Delta) \\quad + \\quad\n (\\sum_{tt=t-L}^{t+L} DSM_{t,tt}^{do} +\\frac{1}{2} \\cdot \\Delta) \\qquad \\forall t $\n\nIn the following, there will be some showcases presenting the problem and its influence. ",
"_____no_output_____"
]
],
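[
[
"The effect of splitting the unused potential can be checked with simple arithmetic. The numbers below are purely illustrative (hypothetical values, not solver output): adding half of the unused potential $\Delta$ to both the up- and the downshift leaves the energy balance of equation (0) unchanged and still satisfies equation (4), which is why the solver can activate it at no additional cost. In the model the same slack is then chained over consecutive hours to extend the effective delay.

```python
# Illustrative single-hour check of the Delta/2 argument (hypothetical numbers)
demand = 100.0          # MW, demand in hour t
cap_max = 100.0         # max{E_up, E_do} in hour t
dsm_up = 0.0            # genuine upshift in hour t
dsm_do = 0.0            # genuine downshift in hour t

delta = cap_max - (dsm_up + dsm_do)       # unused potential, here 100 MW
dsm_up_art = dsm_up + delta / 2           # artificially inflated upshift
dsm_do_art = dsm_do + delta / 2           # artificially inflated downshift

flow = demand + dsm_up_art - dsm_do_art   # equation (0): balance is unchanged
assert flow == demand
assert dsm_up_art + dsm_do_art <= cap_max # equation (4) still holds
print(flow, dsm_up_art, dsm_do_art)       # 100.0 50.0 50.0
```",
"_____no_output_____"
]
],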
[
[
"# data preperation\nwind = [100] * timesteps\n###### edit specifics\n\n# triple extended\nwind[3:6] = [0] * 3\nwind[38:41] = [200] * 3\n\n# no interupting event\ndf_data = df_base.copy()\ndf_data['wind'] = wind\n\n#plot\nfig, ax = plt.subplots(figsize=(10,4))\nax = df_data[['demand_el', 'wind']].plot(ax=ax, drawstyle=\"steps-post\")\nax = df_data.Cap_up.plot(ax=ax, drawstyle=\"steps-post\", secondary_y=True)\nax = (df_data.Cap_do*-1).plot(ax=ax, drawstyle=\"steps-post\", secondary_y=True)\nax.set_yticks(range(-100,150,50))\nax.legend(loc=9, ncol=3)\nax.set_ylabel(\"MW or % \")\nplt.show()",
"_____no_output_____"
]
],
[
[
"- 100 MW constant demand\n- 100 MW missing supply from 3 h to 6 h\n- 100 MW surpluss from 14 h to 17 h the next day\n- The delay time is set to 1 h.",
"_____no_output_____"
]
],
[
[
"# start model\n_ = start_model(df_data, timesteps, plot=True, method='delay', delay_time=1)",
"_____no_output_____"
]
],
[
[
"### what should happen:\n\n- no demand shift, as the delay time could only be realised with a delay time of 32h\n- missing demand should be fully compensated by coal power plant\n- surpluss should go to excess\n### what happens:\n\n- 50 MW of demand is shifted\n- demand shift takes place over 32 h\n\n### why:\n\n- $DSM_{t}^{up} $ & $DSM_{t}^{down} $ can be non-zero at the same time.\n- the sum of $DSM_{t}^{up} $ & $DSM_{t}^{down} $ is limited to 100 MW. Eq. (4)\n - as there is no other demand shift happening $\\Delta = 100MW$\n- $DSM_{7-32}^{up} $ & $DSM_{7-32}^{down} $ can be 50 MW at the same time.\n\n\n\n- 50% of the remaining capacity $\\Delta$ can be used to extend the delay if there is no interupting event and suits the overall objective (e.g. min_cost)",
"_____no_output_____"
],
[
"## when does it happen:\n\n- if there is any $ \\Delta > 0 $ which can be compensated\n\n- depending on the delay time\n\n - for $t_{delay} < dist < \\infty$\n \n delay time of $n$ can overcome $\\frac{n}{2}$ fully used potential \n \n - for $ dist \\leq t_{delay} $\n \n delay time of $n$ can overcome $\\frac{n}{2} + 0.5 \\cdot x $ fully used potential x = |c_dist < delaytime|\n \n",
"_____no_output_____"
],
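[
"The heuristic in the two bullet points above can be written down as a small helper. This is only a sketch of the stated rule of thumb (n/2 hours of fully used potential per delay time of n, plus 0.5 per compensation event closer than the delay time); it is not part of oemof and the function name is illustrative.

```python
def max_bridgeable_hours(delay_time, n_close_events=0):
    # rough upper bound on the hours of fully used potential that the
    # artificial up/down activation can bridge, following the heuristic above
    return delay_time / 2 + 0.5 * n_close_events

print(max_bridgeable_hours(1))      # 0.5
print(max_bridgeable_hours(6, 2))   # 4.0
```",
"_____no_output_____"
],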
[
"### Interrupting event\ninterupting event with -50 % wind after 1 timestep",
"_____no_output_____"
]
],
[
[
"# data preperation\nwind = [100] * timesteps\n###### edit specifics\n# triple extended\nwind[3:6] = [0] * 3\nwind[38:41] = [200] * 3\n\n# interupting event after 1 timestep\nwind[6:7] = [150]\n\ndf_data = df_base.copy()\ndf_data['wind'] = wind\n\n# plot\nfig, ax = plt.subplots(figsize=(10,4))\nax = df_data[['demand_el', 'wind']].plot(ax=ax, drawstyle=\"steps-post\")\nax = df_data.Cap_up.plot(ax=ax, drawstyle=\"steps-post\", secondary_y=True)\nax = (df_data.Cap_do*-1).plot(ax=ax, drawstyle=\"steps-post\", secondary_y=True)\nax.set_yticks(range(-100,150,50))\nax.legend(loc=9, ncol=3)\nax.set_ylabel(\"MW or % \")\nplt.show()",
"_____no_output_____"
]
],
[
[
"- 100 MW constant demand\n- 100 MW missing supply from 3 h to 6 h\n- 100 MW surpluss from 14 h to 17 h the next day\n- 50 MW surplus from 6 h to 7h\n- The delay time is set to 1 h",
"_____no_output_____"
]
],
[
[
"# start model\n_ = start_model(df_data, timesteps, plot=True, method='delay', delay_time=1)",
"_____no_output_____"
]
],
[
[
"### what should happen:\n\n- 50 MW should be shifted in between 6 h and 7h as the delay time is 1 h\n- missing demand should be fully compensated by coal power plant\n- surpluss should fully go to excess\n\n### what happens:\n\n- 50 MW of demand are shifted betwenn 6h and 7 h \n- 25 MW additional demand shift takes place over 32 h\n\n### why:\n\n- $DSM_{t}^{up} $ & $DSM_{t}^{down} $ can be non-zero at the same time.\n- 50 MW demand shift is happening.\n- the sum of $DSM_{t}^{up} $ & $DSM_{t}^{down} $ is limited to 100 MW.\n - there is still 50 MW of potential left at 7 h. $\\Delta = 50MW$\n - $DSM_{7}^{up} = \\, 75 MW $ \n - $DSM_{7}^{down} = \\, 25 MW $\n - $Eq. \\, (4) \\quad DSM_{t}^{up} \\quad + \\quad \\sum_{t=tt-L}^{tt+L} DSM_{t,tt}^{do} \\quad\n \\leq \\quad max \\{ E_{t}^{up}, E_{t}^{do} \\} \\qquad \\forall tt $\n- $ Eq. (0) \\quad demand_{t} \\quad = \\quad \\dot{E}_{t}\\quad - \\quad (DSM_{t}^{up} + \\frac{1}{2} \\cdot \\Delta) \\quad + \\quad\n (\\sum_{tt=t-L}^{t+L} DSM_{t,tt}^{do} +\\frac{1}{2} \\cdot \\Delta) \\qquad \\forall t $ ",
"_____no_output_____"
],
[
"## influence of the delay time\nvarying delay time",
"_____no_output_____"
]
],
[
[
"# data preperation\nwind = [100] * timesteps\n###### edit specifics\n# triple extended\nwind[3:8] = [0] * 5\nwind[38:41] = [200] * 3\n# interupting event\nwind[10:11] = [200]\n\nwind[13:14] = [200]\n\nwind[19:20] = [200]\n\n# plot\ndf_data = df_base.copy()\ndf_data['wind'] = wind\nfig, ax = plt.subplots(figsize=(10,4))\nax = df_data[['demand_el', 'wind']].plot(ax=ax, drawstyle=\"steps-post\")\nax = df_data.Cap_up.plot(ax=ax, drawstyle=\"steps-post\", secondary_y=True)\nax = (df_data.Cap_do*-1).plot(ax=ax, drawstyle=\"steps-post\", secondary_y=True)\nax.set_yticks(range(-100,150,50))\nax.legend(loc=9, ncol=3)\nax.set_ylabel(\"MW or % \")\nplt.show()",
"_____no_output_____"
]
],
[
[
"- 100 MW constant demand\n- 100 MW missing supply from 3 h to 8 h\n- 100 MW surplus from 10 h to 13 h the next day. (after 32h)\n- 100 MW surplus from 10 h to 11 h (c_dist = 3)\n- 100 MW surplus from 13 h to 14 h (c_dist = 5\n- 100 MW surplus from 19 h to 20 h (c_dist = 21)\n- The delay time is set to 1 h",
"_____no_output_____"
],
[
"### when does it happen:\n- if there is any $ \\Delta > 0 $ which can be compensated\n- depending on the delay time\n\n - for $t_{delay} < dist < \\infty$\n \n delay time of $n$ can overcome $\\frac{n}{2}$ fully used potential \n \n - for $ dist \\leq t_{delay} $\n \n delay time of $n$ can overcome $\\frac{n}{2} + 0.5 \\cdot x $ fully used potential x = |c_dist < delaytime|\n \n",
"_____no_output_____"
],
[
"## iteration over delay_time",
"_____no_output_____"
]
],
[
[
"# start model\nfor i in range(7):\n _ = start_model(df_data, timesteps, plot=True, method='delay', delay_time=i, figsize=(5,5))\n plt.title('delay_time = ' + str(i))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
cb5832e71d849204f50f4b8fd700b31d73cffbd9 | 383,804 | ipynb | Jupyter Notebook | jupyter_notebooks/TCGA Batch Correction -- Final Analysis.ipynb | biocore/tcga | c4edac411182c88566df03f18bd78cac151c5059 | [
"BSD-3-Clause"
] | 60 | 2018-01-26T04:14:36.000Z | 2022-03-15T15:39:17.000Z | jupyter_notebooks/TCGA Batch Correction -- Final Analysis.ipynb | KeyuXu/tcga | c4edac411182c88566df03f18bd78cac151c5059 | [
"BSD-3-Clause"
] | 12 | 2016-08-10T22:25:04.000Z | 2021-12-12T08:08:14.000Z | jupyter_notebooks/TCGA Batch Correction -- Final Analysis.ipynb | KeyuXu/tcga | c4edac411182c88566df03f18bd78cac151c5059 | [
"BSD-3-Clause"
] | 36 | 2016-08-09T17:34:32.000Z | 2022-03-24T08:44:39.000Z | 340.553682 | 66,647 | 0.880481 | [
[
[
"import os, numpy, warnings\nimport pandas as pd",
"_____no_output_____"
],
[
"os.environ['R_HOME'] = '/home/gdpoore/anaconda3/envs/tcgaAnalysisPythonR/lib/R'\nwarnings.filterwarnings('ignore')\n%config InlineBackend.figure_format = 'retina'",
"_____no_output_____"
],
[
"%reload_ext rpy2.ipython",
"_____no_output_____"
],
[
"%%R\n\nrequire(ggplot2)\nrequire(snm)\nrequire(limma)\nrequire(edgeR)\nrequire(dplyr)\nrequire(edgeR)\nrequire(pvca)\nrequire(lme4)\nrequire(ggsci)\nrequire(cowplot)\nrequire(doMC)\n\nnumCores <- detectCores()\nregisterDoMC(cores=numCores)",
"_____no_output_____"
],
[
"%%R\nload(\"tcgaVbDataAndMetadataAndSNM.RData\")",
"_____no_output_____"
],
[
"%%R\nprint(dim(vbDataBarnDFReconciled))\nprint(dim(vbDataBarnDFReconciledQC))\nprint(dim(metadataSamplesAllQC))",
"_____no_output_____"
],
[
"%%R\nmetadataSamplesAllQCAML <- droplevels(metadataSamplesAll[! (is.na(metadataSamplesAll$race) | \n is.na(metadataSamplesAll$portion_is_ffpe) |\n is.na(metadataSamplesAll$age_at_diagnosis)),])\n# metadataSamplesAllQCAML <- droplevels(metadataSamplesAllQCAML[metadataSamplesAllQCAML$disease_type == \"Acute Myeloid Leukemia\",])\nvbDataBarnDFReconciledQCAML <- vbDataBarnDFReconciled[rownames(metadataSamplesAllQCAML),]\n\nprint(dim(metadataSamplesAllQCAML))\nprint(dim(vbDataBarnDFReconciledQCAML))",
"_____no_output_____"
],
[
"%%R\nqcMetadata <- metadataSamplesAllQC # metadataSamplesAllQCAML\nqcData <- vbDataBarnDFReconciledQC # vbDataBarnDFReconciledQCAML\n\n# Set up design matrix\ncovDesignNorm <- model.matrix(~0 + sample_type +\n data_submitting_center_label +\n platform +\n experimental_strategy +\n tissue_source_site_label +\n portion_is_ffpe,\n data = qcMetadata)\nprint(colnames(covDesignNorm))\ncolnames(covDesignNorm) <- gsub('([[:punct:]])|\\\\s+','',colnames(covDesignNorm))\nprint(colnames(covDesignNorm))\n\n# Set up counts matrix\ncounts <- t(qcData) # DGEList object from a table of counts (rows=features, columns=samples)\n\n# Normalize using edgeR and then plug into voom\ndge <- DGEList(counts = counts)\nkeep <- filterByExpr(dge, covDesignNorm)\ndge <- dge[keep,,keep.lib.sizes=FALSE]\nprint(\"Now normalizing data...\")\ndge <- calcNormFactors(dge, method = \"TMM\")\nprint(\"Now applying voom on normalized data...\")\nvdge <- voom(dge, design = covDesignNorm, plot = TRUE, save.plot = TRUE, normalize.method=\"none\")",
"_____no_output_____"
],
[
"%%R\n\nprint(table(metadataSamplesAllQCAML$sample_type))",
"_____no_output_____"
],
[
"%%R\n\n# Apply\nbio.var.sample.type <- model.matrix(~sample_type, #sample_type, # histological_diagnosis_label and disease_type tried but cause function to fail\n data=qcMetadata)\nbio.var.gender <- model.matrix(~gender, #sample_type, # histological_diagnosis_label and disease_type tried but cause function to fail\n data=qcMetadata)\nadj.var <- model.matrix(~data_submitting_center_label +\n platform +\n experimental_strategy +\n tissue_source_site_label +\n portion_is_ffpe,\n data=qcMetadata)\ncolnames(bio.var.sample.type) <- gsub('([[:punct:]])|\\\\s+','',colnames(bio.var.sample.type))\ncolnames(bio.var.gender) <- gsub('([[:punct:]])|\\\\s+','',colnames(bio.var.gender))\ncolnames(adj.var) <- gsub('([[:punct:]])|\\\\s+','',colnames(adj.var))\nprint(dim(adj.var))\nprint(dim(bio.var.sample.type))\nprint(dim(bio.var.gender))\nprint(dim(t(vdge$E)))\nprint(dim(covDesignNorm))",
"_____no_output_____"
],
[
"%%R\nsnmDataObjSampleTypeWithExpStrategyFA <- snm(raw.dat = vdge$E, \n bio.var = bio.var.sample.type, \n adj.var = adj.var, \n rm.adj=TRUE,\n verbose = TRUE,\n diagnose = TRUE)\nsnmDataSampleTypeWithExpStrategyFA <- t(snmDataObjSampleTypeWithExpStrategyFA$norm.dat)\n\nprint(dim(snmDataSampleTypeWithExpStrategyFA))",
"_____no_output_____"
],
[
"%%R\nsave(snmDataSampleTypeWithExpStrategyFA, file = \"snmDataSampleTypeWithExpStrategyFA.RData\")",
"_____no_output_____"
]
],
[
[
"# PCA plotting to visually examine batch effects and batch correction",
"_____no_output_____"
]
],
[
[
"%%R\npcaPlotting <- function(pcaObject,pcChoices, dataLabels, factorString, titleString){\n require(ggbiplot)\n theme_update(plot.title = element_text(hjust = 0.5))\n g <- ggbiplot(pcaObject,pcChoices, obs.scale = 1, var.scale = 1,\n groups = dataLabels, ellipse = TRUE,\n alpha = 0.2,\n circle = TRUE,var.axes=FALSE) + \n scale_color_nejm(name = factorString) +\n theme_bw() + \n #theme(legend.direction = \"horizontal\", legend.position = \"top\") +\n ggtitle(titleString) + theme(plot.title = element_text(hjust = 0.5))\n \n print(g)\n}",
"_____no_output_____"
],
[
"%%R\nunnormalizedPCAPlotFA <- pcaPlotting(pcaObject = prcomp(t(vdge$E)),\n pcChoices = c(1,2),\n dataLabels = qcMetadata$data_submitting_center_label,\n factorString = \"Batch\",\n titleString = \"PCA w/o Batch Correction\")",
"_____no_output_____"
],
[
"%%R \nsnmPCAPlotSampleTypeFA <- pcaPlotting(pcaObject = prcomp(snmDataSampleTypeWithExpStrategyFA),\n pcChoices = c(1,2),\n dataLabels = qcMetadata$data_submitting_center_label,\n factorString = \"Sequencing Center\",\n titleString = \"PCA w/ SNM Correction\\n(Target: Sample Type)\")",
"_____no_output_____"
],
[
"# %%R \n# snmPCAPlotGender <- pcaPlotting(pcaObject = prcomp(snmDataGenderWithAML),\n# pcChoices = c(1,2),\n# dataLabels = qcMetadata$data_submitting_center_label,\n# factorString = \"Sequencing Center\",\n# titleString = \"PCA w/ SNM Correction\\n(Target: Gender)\")",
"_____no_output_____"
],
[
"%%R\nggsave(plot = unnormalizedPCAPlotFA, \n filename = \"unnormalizedPCAPlotFA_DecreasedOpacity_NEJM.png\",\n width = 16.2,\n height = 5.29,\n units = \"in\",\n dpi = \"retina\")\n\nggsave(plot = snmPCAPlotSampleTypeFA, \n filename = \"snmPCAPlotSampleTypeFA_DecreasedOpacity_NEJM.png\",\n width = 16.2,\n height = 5.29,\n units = \"in\",\n dpi = \"retina\")\n\n# save(snmDataGenderWithAML, metadataSamplesAllQCAML, \n# vbDataBarnDFReconciledQCAML, \n# file = \"amlVbDataAndMetadataAndSNMByGender.RData\")",
"_____no_output_____"
],
[
"# %%R\n# snmDataObjGenderWithAML <- snm(raw.dat = vdge$E, \n# bio.var = bio.var.gender, \n# adj.var = adj.var, \n# rm.adj=TRUE,\n# verbose = TRUE,\n# diagnose = TRUE)\n# snmDataGenderWithAML <- t(snmDataObjGenderWithAML$norm.dat)\n\n# print(dim(snmDataGenderWithAML))",
"_____no_output_____"
]
],
[
[
"# PVCA using key filtered metadata features (i.e. narrowing down the extended version of this)",
"_____no_output_____"
]
],
[
[
"%%R\n# Implement PVCA\n# From extended model, remove variables that contribute very little if at all:\n# ethnicity, gender, reference_genome\npct_threshold <- 0.8\nmetaPVCAExtendedFiltered <- metadataSamplesAllQC[,c(\"sample_type\",\n \"disease_type\",\n \"data_submitting_center_label\",\n \"platform\",\n \"experimental_strategy\", \n \"tissue_source_site_label\",\n \"portion_is_ffpe\")]\nprint(dim(metaPVCAExtendedFiltered))\nprint(dim(snmDataSampleTypeWithExpStrategy))\nprint(dim(vbDataBarnDFReconciledQC))",
"_____no_output_____"
],
[
"%%R\npvcaVbRawNoVoomNoSNM_ExtendedFiltered_FA <- PVCA(counts = t(vbDataBarnDFReconciledQC), \n meta = metaPVCAExtendedFiltered, \n threshold = pct_threshold,\n inter = FALSE)\nsave(pvcaVbRawNoVoomNoSNM_ExtendedFiltered_FA, file = \"pvcaVbRawNoVoomNoSNM_ExtendedFiltered_FA.RData\")\nPlotPVCA(pvcaVbRawNoVoomNoSNM_ExtendedFiltered_FA, \"Raw count data\")",
"_____no_output_____"
],
[
"%%R\npvcaVoomNoSNM_ExtendedFiltered_FA <- PVCA(counts = vdge$E,\n meta = metaPVCAExtendedFiltered,\n threshold = pct_threshold,\n inter = FALSE)\nsave(pvcaVoomNoSNM_ExtendedFiltered_FA, file = \"pvcaVoomNoSNM_ExtendedFiltered_FA.RData\")\nPlotPVCA(pvcaVoomNoSNM_ExtendedFiltered_FA, \"Voom Normalized\")",
"_____no_output_____"
],
[
"%%R\npvcaSampleWithExpStrategySNM_ExtendedFiltered_FA <- PVCA(counts = t(snmDataSampleTypeWithExpStrategyFA), \n meta = metaPVCAExtendedFiltered,\n threshold = pct_threshold,\n inter = FALSE)\nsave(pvcaSampleWithExpStrategySNM_ExtendedFiltered_FA, \n file = \"pvcnoaSampleWithExpStrategySNM_ExtendedFiltered_FA.RData\")\nPlotPVCA(pvcaSampleWithExpStrategySNM_ExtendedFiltered_FA, \n \"Voom Normalized & SNM Corrected Plus Exp Strategy (Target is Sample Type)\")",
"_____no_output_____"
],
[
"%%R\n1+2",
"_____no_output_____"
]
],
[
[
"# Examining sample and taxa ratio changes due to batch correction",
"_____no_output_____"
]
],
[
[
"%%R\nrequire(ggplot2)\nrequire(matrixStats)\ndivSNMDataSampleType <- snmDataSampleType / t(snmDataObjSampleType$raw.dat)\ntaxaMedians <- data.frame(Medians = colMedians(divSNMDataSampleType), \n Taxa = colnames(divSNMDataSampleType),\n pval = factor(ifelse(snmDataObjSampleType$pval <=0.05, \n yes = \"P-value <= 0.05\", no = \"P-value > 0.05\")))\nsampleMedians <- data.frame(Medians = rowMedians(divSNMDataSampleType), \n Samples = rownames(divSNMDataSampleType),\n SeqCenter = metadataSamplesAllQC$data_submitting_center_label,\n SampleType = metadataSamplesAllQC$sample_type,\n CancerType = metadataSamplesAllQC$disease_type)\ngt <- ggplot(taxaMedians, aes(x = reorder(Taxa, -Medians), y = Medians, fill = pval)) + \ngeom_bar(stat = \"identity\") +\ntheme(axis.title.x=element_blank(), axis.text.x=element_blank(), axis.ticks.x=element_blank()) +\nlabs(y = \"Median of Normalizing Ratios Per Taxa\", x = \"Samples\", fill = \"ANOVA Result Per Taxa\") \n\ngs <- ggplot(sampleMedians, aes(x = reorder(Samples, -Medians), y = Medians, fill = CancerType)) + \n geom_bar(stat = \"identity\") + coord_flip() +\n theme(axis.text.y=element_blank(), axis.ticks.y=element_blank()) +\n scale_y_log10() + labs(y = \"Median of Normalizing Ratios Per Sample\", x = \"Samples\", fill='Cancer Type') ",
"_____no_output_____"
],
[
"%%R\ngt",
"_____no_output_____"
],
[
"%%R\nggsave(plot = gt, \n filename = \"snmNormMedianPerTaxaPval.png\",\n width = 8.5,\n height = 6,\n units = \"in\",\n dpi = \"retina\")",
"_____no_output_____"
],
[
"%%R\nrequire(pheatmap)\npheatmap(snmDataSampleTypeLMFit$coefficients,\n clustering_distance_rows = \"correlation\",\n clustering_distance_cols = \"correlation\",\n show_rownames = FALSE,\n show_colnames = FALSE,\n filename = \"snmLMFitCoefCorr.png\")",
"_____no_output_____"
],
[
"# %%R\n# save(snmDataObjPathStage, snmDataPathStage, metadataSamplesAllQCPath, file = \"snmResultsPathBinned.RData\")",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
cb583507d3679ab83b35d74c546c4bb5310c02e8 | 4,852 | ipynb | Jupyter Notebook | notebooks/2021_tests/get pages with manually curated links.ipynb | alphagov/govuk_ab_analysis | fec954d9c90be09e1a74ced64551c2eb68b05d56 | [
"MIT"
] | null | null | null | notebooks/2021_tests/get pages with manually curated links.ipynb | alphagov/govuk_ab_analysis | fec954d9c90be09e1a74ced64551c2eb68b05d56 | [
"MIT"
] | 4 | 2021-12-01T17:29:01.000Z | 2022-01-27T16:02:50.000Z | notebooks/2021_tests/get pages with manually curated links.ipynb | alphagov/govuk_ab_analysis | fec954d9c90be09e1a74ced64551c2eb68b05d56 | [
"MIT"
] | null | null | null | 25.139896 | 253 | 0.571723 | [
[
[
"import pandas as pd\n\nimport pymongo",
"_____no_output_____"
]
],
[
[
"Use [govuk-mongodb-content](https://github.com/alphagov/govuk-mongodb-content) to setup local mongodb instance, using documentation [here](https://docs.google.com/document/d/1RhJwC79XLryOpr1ELWfG0E1eni4dGMompOjOZrDADd0/edit#heading=h.qkjm4ngtcm81)",
"_____no_output_____"
]
],
[
[
"mongo_client = pymongo.MongoClient(\"mongodb://localhost:27017/\")\n\ncontent_store_db = mongo_client['content_store']\ncontent_store_collection = content_store_db['content_items']",
"_____no_output_____"
],
[
"CONTENT_ID_PROJECTION = {\"content_id\": 1}",
"_____no_output_____"
],
[
"FILTER_HAS_MANUAL_RELATED_LINKS = {\n \"$or\": [\n# standard related links\n {\"expanded_links.ordered_related_items\": {\"$exists\": True}},\n \n# related_mainstream_content link, e.g. see /guidance/work-out-if-youll-pay-the-scottish-rate-of-income-tax\n {\"expanded_links.related_mainstream_content\": {\"$exists\": True}},\n \n# quick_links, e.g. see /government/organisations/hm-revenue-customs/contact/creative-industry-tax-reliefs\n {\"details.quick_links\": {\"$exists\": True}} \n ]}",
"_____no_output_____"
],
[
"pages_with_manual_related_links_cursor = content_store_collection.find(\n FILTER_HAS_MANUAL_RELATED_LINKS,\n CONTENT_ID_PROJECTION)",
"_____no_output_____"
],
[
"pages_with_manual_related_links = list(pages_with_manual_related_links_cursor)",
"_____no_output_____"
],
[
"manual_related_links_df = pd.DataFrame(pages_with_manual_related_links)",
"_____no_output_____"
],
[
"manual_related_links_df = manual_related_links_df.rename(\n columns = {'_id': 'page_path'})",
"_____no_output_____"
],
[
"manual_related_links_df.to_csv(\n 'manually_curated_related_links_pages.csv',\n index=False)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb583ad64e3c3a45eeed0d4661e29ab23d43fb8d | 167,202 | ipynb | Jupyter Notebook | Clusterizacao/Sistemas_de_Recomendacao/.ipynb_checkpoints/Sistema_de_Recomendacao_de_Livros-checkpoint.ipynb | nic1611/deposito-ia | 771665c10d3dac861f33c88e5007cdabbf1589cb | [
"MIT"
] | 1 | 2021-07-23T15:44:30.000Z | 2021-07-23T15:44:30.000Z | Clusterizacao/Sistemas_de_Recomendacao/.ipynb_checkpoints/Sistema_de_Recomendacao_de_Livros-checkpoint.ipynb | nic1611/deposito-ia | 771665c10d3dac861f33c88e5007cdabbf1589cb | [
"MIT"
] | null | null | null | Clusterizacao/Sistemas_de_Recomendacao/.ipynb_checkpoints/Sistema_de_Recomendacao_de_Livros-checkpoint.ipynb | nic1611/deposito-ia | 771665c10d3dac861f33c88e5007cdabbf1589cb | [
"MIT"
] | 1 | 2021-07-23T15:44:36.000Z | 2021-07-23T15:44:36.000Z | 36.411585 | 464 | 0.332095 | [
[
[
"\n# <font color='Blue'> Ciência dos Dados na Prática</font>\n\n",
"_____no_output_____"
],
[
"# Sistemas de Recomendação",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"Cada empresa de consumo de Internet precisa um sistema de recomendação como **Netflix**, **Youtube**, **feed de notícias**, **Site de Viagens e passagens Aéreas**, **Hotéis**, **Mercado livre**, **Magalu**, **Olist**, etc. O que você deseja mostrar de uma grande variedade de itens é um sistema de recomendação.",
"_____no_output_____"
],
[
"## O que realmente é o Sistema de Recomendação?",
"_____no_output_____"
],
[
"Um mecanismo de recomendação é uma classe de aprendizado de máquina que oferece sugestões relevantes ao cliente. Antes do sistema de recomendação, a grande tendência para comprar era aceitar sugestões de amigos. Mas agora o Google sabe quais notícias você vai ler, o Youtube sabe que tipo de vídeos você vai assistir com base em seu histórico de pesquisa, histórico de exibição ou histórico de compra.",
"_____no_output_____"
],
[
"Um sistema de recomendação ajuda uma organização a criar clientes fiéis e construir a confiança deles nos produtos e serviços desejados para os quais vieram em seu site. Os sistemas de recomendação de hoje são tão poderosos que também podem lidar com o novo cliente que visitou o site pela primeira vez. Eles recomendam os produtos que estão em alta ou com alta classificação e também podem recomendar os produtos que trazem o máximo de lucro para a empresa.",
"_____no_output_____"
],
[
"Um sistema de recomendação de livros é um tipo de sistema de recomendação em que temos que recomendar livros semelhantes ao leitor com base em seu interesse. O sistema de recomendação de livros é usado por sites online que fornecem e-books como google play books, open library, good Read's, etc.",
"_____no_output_____"
],
[
"# 1° Problema de Negócio",
"_____no_output_____"
],
[
"Usaremos o método de **filtragem baseada em colaboração** para construir um sistema de recomendação de livros. Ou seja, precisamos construir uma máquina preditiva que, **com base nas escolhas de leituras de outras pessoas, o livro seja recomendado a outras pessoas com interesses semelhantes.**\n\n",
"_____no_output_____"
],
[
"Ex:\n\n**Eduardo** leu e gostou dos livros A loja de Tudo e Elon Musk.\n\n**Clarice** também leu e gostou desses dois livros\n\n",
"_____no_output_____"
],
[
"\n\n",
"_____no_output_____"
],
[
"Agora o **Eduardo** leu e gostou do livro \"StartUp de U$100\" que não é lido pela **Clarice**. \n",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"\nEntão **temos que recomendar o livro **\"StartUp de U$100\" para **Clarice**\n\n\n",
"_____no_output_____"
],
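[
"To make the idea concrete, here is a tiny, purely illustrative item-user matrix (the ratings are hypothetical, not taken from the real dataset). Books that were liked by the same readers end up close to each other, which is the same mechanism we will apply to the full dataset later with `NearestNeighbors`.

```python
import pandas as pd
from sklearn.neighbors import NearestNeighbors

# hypothetical toy ratings: 1 = read and liked, 0 = not read
toy = pd.DataFrame(
    {'Eduardo': [1, 1, 1],
     'Clarice': [1, 1, 0]},
    index=['A loja de Tudo', 'Elon Musk', 'StartUp de U$100'])

nn = NearestNeighbors(metric='euclidean').fit(toy.values)
dist, idx = nn.kneighbors(toy.loc[['StartUp de U$100']].values, n_neighbors=3)
print(toy.index[idx[0]])  # the closest books share readers with 'StartUp de U$100'
```",
"_____no_output_____"
],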
[
"## **Resultado**\n\n Você concorda que se vc receber uma recomendação certeira, a chance de vc comprar o livro é muito maior?\n\n Vc concorda que se mais pessoas comprarem, maior será o faturamento da empresa?\n\n Vc concorda que os clientes vão ficar muito mais satisfeitos se o site demonstrar que conhece ela e que realmente só oferece produtos que realmente são relevantes p ela?",
"_____no_output_____"
],
[
"# 2° Análise Exploratória dos Dados\n",
"_____no_output_____"
]
],
[
[
"#Importação das Bibliotecas ou Pacotes\nimport numpy as np\nimport pandas as pd\nfrom sklearn.neighbors import NearestNeighbors",
"_____no_output_____"
]
],
[
[
"Fonte de Dados:\n\nhttps://www.kaggle.com/rxsraghavagrawal/book-recommender-system",
"_____no_output_____"
],
[
"#### Base de Livros",
"_____no_output_____"
]
],
[
[
"# Importação dos Dados Referentes aos Livros\nbooks = pd.read_csv(\"BX-Books.csv\", sep=';', encoding=\"latin-1\", error_bad_lines= False)\n",
"C:\\Users\\Nicolas\\miniconda3\\envs\\datascience\\lib\\site-packages\\IPython\\core\\interactiveshell.py:3441: FutureWarning: The error_bad_lines argument has been deprecated and will be removed in a future version.\n\n\n exec(code_obj, self.user_global_ns, self.user_ns)\nb'Skipping line 6452: expected 8 fields, saw 9\\nSkipping line 43667: expected 8 fields, saw 10\\nSkipping line 51751: expected 8 fields, saw 9\\n'\nb'Skipping line 92038: expected 8 fields, saw 9\\nSkipping line 104319: expected 8 fields, saw 9\\nSkipping line 121768: expected 8 fields, saw 9\\n'\nb'Skipping line 144058: expected 8 fields, saw 9\\nSkipping line 150789: expected 8 fields, saw 9\\nSkipping line 157128: expected 8 fields, saw 9\\nSkipping line 180189: expected 8 fields, saw 9\\nSkipping line 185738: expected 8 fields, saw 9\\n'\nb'Skipping line 209388: expected 8 fields, saw 9\\nSkipping line 220626: expected 8 fields, saw 9\\nSkipping line 227933: expected 8 fields, saw 11\\nSkipping line 228957: expected 8 fields, saw 10\\nSkipping line 245933: expected 8 fields, saw 9\\nSkipping line 251296: expected 8 fields, saw 9\\nSkipping line 259941: expected 8 fields, saw 9\\nSkipping line 261529: expected 8 fields, saw 9\\n'\nC:\\Users\\Nicolas\\miniconda3\\envs\\datascience\\lib\\site-packages\\IPython\\core\\interactiveshell.py:3441: DtypeWarning: Columns (3) have mixed types.Specify dtype option on import or set low_memory=False.\n exec(code_obj, self.user_global_ns, self.user_ns)\n"
],
[
"books",
"_____no_output_____"
]
],
[
[
"#### Base de Usuários",
"_____no_output_____"
]
],
[
[
"# Importação dos Dados Referentes aos Usuários\nusers = pd.read_csv(\"BX-Users.csv\", sep=';', encoding=\"latin-1\", error_bad_lines= False)\n",
"C:\\Users\\Nicolas\\miniconda3\\envs\\datascience\\lib\\site-packages\\IPython\\core\\interactiveshell.py:3441: FutureWarning: The error_bad_lines argument has been deprecated and will be removed in a future version.\n\n\n exec(code_obj, self.user_global_ns, self.user_ns)\n"
],
[
"users",
"_____no_output_____"
]
],
[
[
"#### Base de Ratings",
"_____no_output_____"
]
],
[
[
"# Importação dos Dados Referentes aos Ratings dados aos Livros (Avaliação do Usuário em relação ao Livro)\nratings = pd.read_csv(\"BX-Book-Ratings.csv\", sep=';', encoding=\"latin-1\", error_bad_lines= False)",
"C:\\Users\\Nicolas\\miniconda3\\envs\\datascience\\lib\\site-packages\\IPython\\core\\interactiveshell.py:3441: FutureWarning: The error_bad_lines argument has been deprecated and will be removed in a future version.\n\n\n exec(code_obj, self.user_global_ns, self.user_ns)\n"
],
[
"ratings.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1149780 entries, 0 to 1149779\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 User-ID 1149780 non-null int64 \n 1 ISBN 1149780 non-null object\n 2 Book-Rating 1149780 non-null int64 \ndtypes: int64(2), object(1)\nmemory usage: 26.3+ MB\n"
]
],
[
[
"# 3° Pré-Processamento dos Dados",
"_____no_output_____"
],
[
"### Renomeando Colunas",
"_____no_output_____"
],
[
"Agora, no arquivo de livros, temos algumas colunas extras que não são necessárias para nossa tarefa, como URLs de imagens. E vamos renomear as colunas de cada arquivo, pois o nome da coluna contém espaço e letras maiúsculas, então faremos as correções para facilitar o uso.",
"_____no_output_____"
]
],
[
[
"# Rename de Colunas\nbooks = books[['ISBN', 'Book-Title', 'Book-Author', 'Year-Of-Publication', 'Publisher']]\nbooks.rename(columns = {'Book-Title':'title', 'Book-Author':'author', 'Year-Of-Publication':'year', 'Publisher':'publisher'}, inplace=True)\nusers.rename(columns = {'User-ID':'user_id', 'Location':'location', 'Age':'age'}, inplace=True)\nratings.rename(columns = {'User-ID':'user_id', 'Book-Rating':'rating'}, inplace=True)",
"C:\\Users\\Nicolas\\miniconda3\\envs\\datascience\\lib\\site-packages\\pandas\\core\\frame.py:5034: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n return super().rename(\n"
],
[
"books",
"_____no_output_____"
],
[
"#Quantidade de Ratings por Usuários\nratings['user_id'].value_counts()",
"_____no_output_____"
],
[
"# Livros que tenham mais de 200 avaliações\nx = ratings['user_id'].value_counts() > 200",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"# Quantidade Usuários\n# user_ids\ny = x[x].index \nprint(y.shape)",
"(899,)\n"
],
[
"y",
"_____no_output_____"
]
],
[
[
"#### *Decisão de Negócio*",
"_____no_output_____"
]
],
[
[
"# Trazendo ratings somente dos usuários q avaliaram mais de 200 livros\nratings = ratings[ratings['user_id'].isin(y)]",
"_____no_output_____"
],
[
"ratings",
"_____no_output_____"
],
[
"# Juntando tabelas (Join ou Merge)\nrating_with_books = ratings.merge(books, on='ISBN')\nrating_with_books.head()",
"_____no_output_____"
],
[
"#Quantidade de rating dos livros\nnumber_rating = rating_with_books.groupby('title')['rating'].count().reset_index()\n",
"_____no_output_____"
],
[
"number_rating",
"_____no_output_____"
],
[
"#Renomeando coluna\nnumber_rating.rename(columns= {'rating':'number_of_ratings'}, inplace=True)\nnumber_rating\n",
"_____no_output_____"
],
[
"# Juntando a tabela de livros com os Ratings com a tabela de quantidade de ratings por livro\nfinal_rating = rating_with_books.merge(number_rating, on='title')\nfinal_rating",
"_____no_output_____"
]
],
[
[
"#### *Decisão de Negócio*",
"_____no_output_____"
]
],
[
[
"# Filtrar somente livros que tenham pelo menos 50 avaliações\nfinal_rating = final_rating[final_rating['number_of_ratings'] >= 50]\nfinal_rating.shape",
"_____no_output_____"
],
[
"# Vamos descartar os valores duplicados, porque se o mesmo usuário tiver avaliado o mesmo livro várias vezes, isso pode dar rúim.\nfinal_rating.drop_duplicates(['user_id','title'], inplace=True)\nfinal_rating.shape",
"C:\\Users\\Nicolas\\miniconda3\\envs\\datascience\\lib\\site-packages\\pandas\\util\\_decorators.py:311: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n return func(*args, **kwargs)\n"
],
[
"final_rating",
"_____no_output_____"
]
],
[
[
"### Vamos fazer uma parada que é o seguinte:\n\nVamos transpor os **usuários** em **colunas**, ao invés de linhas, pois as avaliações dadas por eles serão as **variáveis** da máquina preditiva.",
"_____no_output_____"
]
],
[
[
"final_rating.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 59850 entries, 0 to 236705\nData columns (total 8 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 user_id 59850 non-null int64 \n 1 ISBN 59850 non-null object\n 2 rating 59850 non-null int64 \n 3 title 59850 non-null object\n 4 author 59850 non-null object\n 5 year 59850 non-null object\n 6 publisher 59850 non-null object\n 7 number_of_ratings 59850 non-null int64 \ndtypes: int64(3), object(5)\nmemory usage: 4.1+ MB\n"
],
[
"# Transposição de linhas(users_id) em colunas\nbook_pivot = final_rating.pivot_table(columns='user_id', index='title', values=\"rating\")",
"_____no_output_____"
],
[
"book_pivot",
"_____no_output_____"
],
[
"book_pivot.shape",
"_____no_output_____"
],
[
"book_pivot.fillna(0, inplace=True)",
"_____no_output_____"
],
[
"book_pivot",
"_____no_output_____"
]
],
[
[
"Preparamos nosso conjunto de dados para modelagem. Usaremos o algoritmo de vizinhos mais próximos (nearest neighbors algorithm), que é usado para agrupamento com base na **distância euclidiana**.\n\n",
"_____no_output_____"
],
[
"**Nesta aula explicadim**:\n\nhttps://www.youtube.com/watch?v=jD4AKp4-Tmo\n\n\n",
"_____no_output_____"
],
[
"\nMas aqui na tabela dinâmica, temos muitos valores zero e no agrupamento, esse poder de computação aumentará para calcular a distância dos valores zero, portanto, converteremos a tabela dinâmica para a matriz esparsa e, em seguida, alimentaremos o modelo.",
"_____no_output_____"
]
],
[
[
"from scipy.sparse import csr_matrix\nbook_sparse = csr_matrix(book_pivot)",
"_____no_output_____"
]
],
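[
[
"As an optional sanity check, we can compare how much smaller the sparse representation is than the dense pivot table (illustrative; the exact numbers depend on the data):

```python
dense_bytes = book_pivot.values.nbytes
sparse_bytes = (book_sparse.data.nbytes
                + book_sparse.indices.nbytes
                + book_sparse.indptr.nbytes)
print(f'dense: {dense_bytes / 1e6:.1f} MB, sparse: {sparse_bytes / 1e6:.1f} MB')
```",
"_____no_output_____"
]
],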
[
[
"#4° Criação da Máquina Preditiva",
"_____no_output_____"
],
[
"https://scikit-learn.org/stable/modules/neighbors.html",
"_____no_output_____"
]
],
[
[
"from sklearn.neighbors import NearestNeighbors\nmodel = NearestNeighbors(algorithm='brute')\nmodel.fit(book_sparse)",
"_____no_output_____"
]
],
[
[
"## Novas Predições",
"_____no_output_____"
]
],
[
[
"#1984\ndistances, suggestions = model.kneighbors(book_pivot.iloc[0, :].values.reshape(1, -1))",
"_____no_output_____"
],
[
"book_pivot.head()",
"_____no_output_____"
],
[
"for i in range(len(suggestions)):\n print(book_pivot.index[suggestions[i]])",
"Index(['1984', 'No Safe Place', 'A Civil Action', 'Foucault's Pendulum',\n 'Long After Midnight'],\n dtype='object', name='title')\n"
],
[
"#Hannibal\ndistances, suggestions = model.kneighbors(book_pivot.iloc[236, :].values.reshape(1, -1))",
"_____no_output_____"
],
[
"book_pivot.head(236)",
"_____no_output_____"
],
[
"for i in range(len(suggestions)):\n print(book_pivot.index[suggestions[i]])",
"Index(['Hard Eight : A Stephanie Plum Novel (A Stephanie Plum Novel)',\n 'Seven Up (A Stephanie Plum Novel)',\n 'Hot Six : A Stephanie Plum Novel (A Stephanie Plum Novel)',\n 'The Next Accident', 'The Mulberry Tree'],\n dtype='object', name='title')\n"
],
[
"#Harry Potter\ndistances, suggestions = model.kneighbors(book_pivot.iloc[238, :].values.reshape(1, -1))",
"_____no_output_____"
],
[
"book_pivot.head(238)",
"_____no_output_____"
],
[
"for i in range(len(suggestions)):\n print(book_pivot.index[suggestions[i]])",
"Index(['Harry Potter and the Goblet of Fire (Book 4)',\n 'Harry Potter and the Prisoner of Azkaban (Book 3)',\n 'Harry Potter and the Order of the Phoenix (Book 5)',\n 'The Cradle Will Fall', 'Exclusive'],\n dtype='object', name='title')\n"
]
],
[
[
"# Fim",
"_____no_output_____"
],
[
"## Valeu!",
"_____no_output_____"
],
[
"Fonte de Inspiração:",
"_____no_output_____"
],
[
"https://www.analyticsvidhya.com/blog/2021/06/build-book-recommendation-system-unsupervised-learning-project/",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
cb5844807e9138453d8bcc657a54f02ae3bb968f | 28,869 | ipynb | Jupyter Notebook | article/03.transporter_stats_taxprofiles.ipynb | johnne/transporters | abec19436b50ad8275d13b15dc5ef17583356995 | [
"MIT"
] | 2 | 2016-09-22T12:56:19.000Z | 2020-11-13T14:45:52.000Z | article/03.transporter_stats_taxprofiles.ipynb | johnne/transporters | abec19436b50ad8275d13b15dc5ef17583356995 | [
"MIT"
] | 3 | 2015-11-21T12:09:26.000Z | 2015-11-23T16:12:58.000Z | article/03.transporter_stats_taxprofiles.ipynb | johnne/transporters | abec19436b50ad8275d13b15dc5ef17583356995 | [
"MIT"
] | 1 | 2020-02-21T12:47:58.000Z | 2020-02-21T12:47:58.000Z | 33.374566 | 212 | 0.597908 | [
[
[
"# Transporter statistics and taxonomic profiles",
"_____no_output_____"
],
[
"## Overview",
"_____no_output_____"
],
[
"In this notebook some overview statistics of the datasets are computed and taxonomic profiles investigated. The notebook uses data produced by running the [01.process_data](01.process_data.ipynb) notebook.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport glob\nimport os\nimport matplotlib.pyplot as plt, matplotlib\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\nplt.style.use('ggplot')",
"_____no_output_____"
],
[
"def make_tax_table(df,name=\"\",rank=\"superkingdom\"):\n df_t = df.groupby(rank).sum()\n df_tp = df_t.div(df_t.sum())*100\n df_tp_mean = df_tp.mean(axis=1)\n df_tp_max = df_tp.max(axis=1)\n df_tp_min = df_tp.min(axis=1)\n df_tp_sd = df_tp.std(axis=1)\n table = pd.concat([df_tp_mean,df_tp_max,df_tp_min,df_tp_sd],axis=1)\n table.columns = [name+\" mean(%)\",name+\" max(%)\",name+\" min(%)\",name+\" std\"]\n table.rename(index=lambda x: x.split(\"_\")[0], inplace=True)\n return table",
"_____no_output_____"
]
],
[
[
"## Load the data",
"_____no_output_____"
]
],
[
[
"transinfo = pd.read_csv(\"selected_transporters_classified.tab\", header=0, sep=\"\\t\", index_col=0)\ntransinfo.head()",
"_____no_output_____"
]
],
[
[
"Read gene abundance values with taxonomic annotations.",
"_____no_output_____"
]
],
[
[
"mg_cov = pd.read_table(\"data/mg/all_genes.tpm.taxonomy.tsv.gz\", header=0, sep=\"\\t\", index_col=0)\nmt_cov = pd.read_table(\"data/mt/all_genes.tpm.taxonomy.tsv.gz\", header=0, sep=\"\\t\", index_col=0)",
"_____no_output_____"
]
],
[
[
"Read orf level transporter data.",
"_____no_output_____"
]
],
[
[
"mg_transcov = pd.read_table(\"results/mg/all_transporters.tpm.taxonomy.tsv.gz\", header=0, sep=\"\\t\", index_col=0)\nmt_transcov = pd.read_table(\"results/mt/all_transporters.tpm.taxonomy.tsv.gz\", header=0, sep=\"\\t\", index_col=0)",
"_____no_output_____"
],
[
"mg_select_transcov = pd.read_table(\"results/mg/select_trans_genes.tpm.tsv\", header=0, sep=\"\\t\", index_col=0)\nmt_select_transcov = pd.read_table(\"results/mt/select_trans_genes.tpm.tsv\", header=0, sep=\"\\t\", index_col=0)",
"_____no_output_____"
]
],
[
[
"Read transporter abundances.",
"_____no_output_____"
]
],
[
[
"mg_trans = pd.read_csv(\"results/mg/all_trans.tpm.tsv\", header=0, sep=\"\\t\", index_col=0)\nmt_trans = pd.read_csv(\"results/mt/all_trans.tpm.tsv\", header=0, sep=\"\\t\", index_col=0)",
"_____no_output_____"
]
],
[
[
"## Generate taxonomic overview table",
"_____no_output_____"
]
],
[
[
"mg_tax_table = make_tax_table(mg_cov,name=\"MG \")\nmg_tax_table_cyano = make_tax_table(mg_cov,name=\"MG \",rank=\"phylum\").loc[\"Cyanobacteria\"]\nmg_tax_table = pd.concat([mg_tax_table,pd.DataFrame(mg_tax_table_cyano).T])\nmg_tax_table",
"_____no_output_____"
],
[
"mt_tax_table = make_tax_table(mt_cov,name=\"MT \")\nmt_tax_table_cyano = make_tax_table(mt_cov,name=\"MT \",rank=\"phylum\").loc[\"Cyanobacteria\"]\nmt_tax_table = pd.concat([mt_tax_table,pd.DataFrame(mt_tax_table_cyano).T])\nmt_tax_table",
"_____no_output_____"
]
],
[
[
"Concatenate overview tables. This is **Table 2** in the paper.",
"_____no_output_____"
]
],
[
[
"tax_table = pd.concat([mg_tax_table,mt_tax_table],axis=1).round(2)",
"_____no_output_____"
],
[
"tax_table.to_csv(\"results/Table2.tsv\",sep=\"\\t\")",
"_____no_output_____"
]
],
[
[
"## Generate general overview of transporters",
"_____no_output_____"
],
[
"Make table with number of ORFs, ORFs classified as transporters, min, mean and max coverage for transporter ORFs.",
"_____no_output_____"
]
],
[
[
"num_genes = len(mg_cov)\ngene_lengths = pd.read_table(\"data/mg/all_genes.tpm.tsv.gz\", usecols=[1])\ngene_lengths = np.round(gene_lengths.mean())",
"_____no_output_____"
],
[
"def generate_transporter_stats(df):\n # Number of transporter genes (genes with sum > 0)\n num_trans_genes = len(df.loc[df.groupby(level=0).sum().sum(axis=1)>0])\n # Percent of transporter genes\n num_trans_genes_p = np.round((num_trans_genes / float(num_genes))*100,2)\n # Mean total coverage for transporter genes across the samples\n transcov_mean = np.round(((df.groupby(level=0).sum().sum().mean()) / 1e6)*100,2)\n # Minimum total coverage for transporter genes across the samples\n transcov_min = np.round(((df.groupby(level=0).sum().sum().min()) / 1e6)*100,2)\n # Maximum ...\n transcov_max = np.round(((df.groupby(level=0).sum().sum().max()) / 1e6)*100,2)\n # Standard dev\n transcov_std = np.round(((df.groupby(level=0).sum().sum() / 1e6)*100).std(),2)\n return num_trans_genes, num_trans_genes_p, transcov_mean, transcov_min, transcov_max, transcov_std",
"_____no_output_____"
],
[
"mg_num_trans_genes, mg_num_trans_genes_p, mg_transcov_mean, mg_transcov_min, mg_transcov_max, mg_transcov_std = generate_transporter_stats(mg_transcov)",
"_____no_output_____"
],
[
"mt_num_trans_genes, mt_num_trans_genes_p, mt_transcov_mean, mt_transcov_min, mt_transcov_max, mt_transcov_std = generate_transporter_stats(mt_transcov)",
"_____no_output_____"
]
],
[
[
"Create table with transporter statistics for MG and MT datasets (**Table 3** in the paper).",
"_____no_output_____"
]
],
[
[
"stats_df = pd.DataFrame(data={\n \"Transporter genes\": [\"{} ({}%)\".format(mg_num_trans_genes,mg_num_trans_genes_p),\"{} ({}%)\".format(mt_num_trans_genes,mt_num_trans_genes_p)],\n \"Transporter mean\": [\"{}%\".format(mg_transcov_mean),\"{}%\".format(mt_transcov_mean)],\n \"Transporter min\": [\"{}%\".format(mg_transcov_min),\"{}%\".format(mt_transcov_min)],\n \"Transporter max\": [\"{}%\".format(mg_transcov_max),\"{}%\".format(mt_transcov_max)],\n \"Transporter std\": [\"{}%\".format(mg_transcov_std),\"{}%\".format(mt_transcov_std)]},index=[\"MG\",\"MT\"]).T\nstats_df.to_csv(\"results/Table3.tsv\",sep=\"\\t\")\nstats_df",
"_____no_output_____"
]
],
[
[
"Do the same with the selected transporters.",
"_____no_output_____"
]
],
[
[
"mg_select_num_trans_genes, mg_select_num_trans_genes_p, mg_select_transcov_mean, mg_select_transcov_min, mg_select_transcov_max, mg_select_transcov_std = generate_transporter_stats(mg_select_transcov)",
"_____no_output_____"
],
[
"mt_select_num_trans_genes, mt_select_num_trans_genes_p, mt_select_transcov_mean, mt_select_transcov_min, mt_select_transcov_max, mt_select_transcov_std = generate_transporter_stats(mt_select_transcov)",
"_____no_output_____"
],
[
"select_stats_df = pd.DataFrame(data={\n \"Selected transporter genes\": [\"{} ({}%)\".format(mg_select_num_trans_genes,mg_select_num_trans_genes_p),\"{} ({}%)\".format(mt_select_num_trans_genes,mt_select_num_trans_genes_p)],\n \"Selected transporter mean\": [\"{}%\".format(mg_select_transcov_mean),\"{}%\".format(mt_select_transcov_mean)],\n \"Selected transporter min\": [\"{}%\".format(mg_select_transcov_min),\"{}%\".format(mt_select_transcov_min)],\n \"Selected transporter max\": [\"{}%\".format(mg_select_transcov_max),\"{}%\".format(mt_select_transcov_max)],\n \"Selected transporter std\": [\"{}%\".format(mg_select_transcov_std),\"{}%\".format(mt_select_transcov_std)]},index=[\"mg_select\",\"mt_select\"]).T\nselect_stats_df.to_csv(\"results/selected_transporter_stats.tab\",sep=\"\\t\")\nselect_stats_df",
"_____no_output_____"
]
],
[
[
"## Generate kingdom/phylum level taxonomic plots",
"_____no_output_____"
]
],
[
[
"def get_euk_taxa(taxa, df, rank):\n euk_taxa = []\n for t in taxa:\n k = df.loc[df[rank]==t, \"superkingdom\"].unique()[0]\n if k==\"Eukaryota\":\n euk_taxa.append(t)\n return euk_taxa",
"_____no_output_____"
],
[
"def set_euk_hatches(ax):\n for patch in ax.patches:\n t = color2taxmap[patch.properties()['facecolor'][0:-1]]\n if t in euk_taxa:\n patch.set_hatch(\"////\")",
"_____no_output_____"
]
],
[
[
"Generate profiles for metagenomes.",
"_____no_output_____"
]
],
[
[
"# Get sum of abundances at superkingdom level\nmg_k = mg_cov.groupby(\"superkingdom\").sum()\n# Normalize to %\nmg_kn = mg_k.div(mg_k.sum())*100\nmg_kn = mg_kn.loc[[\"Archaea\",\"Bacteria\",\"Eukaryota\",\"Viruses\",\"Unclassified.sequences\",\"other sequences\"]]\nmg_kn = mg_kn.loc[mg_kn.sum(axis=1).sort_values(ascending=False).index]\n# Swtich Proteobacterial classes to phylum\nmg_cov.loc[mg_cov.phylum==\"Proteobacteria\",\"phylum\"] = mg_cov.loc[mg_cov.phylum==\"Proteobacteria\",\"class\"]\n# Normalize at phylum level\nmg_p = mg_cov.groupby(\"phylum\").sum()\nmg_pn = mg_p.div(mg_p.sum())*100",
"_____no_output_____"
],
[
"_ = mg_pn.mean(axis=1).sort_values(ascending=False)\n_.loc[~_.index.str.contains(\"Unclassified\")].head(8)",
"_____no_output_____"
]
],
[
[
"Create the taxonomic overview of the 7 most abundant phyla in the metagenomic dataset. This is **Figure 1** in the paper.",
"_____no_output_____"
]
],
[
[
"select_taxa = [\"Verrucomicrobia\",\"Actinobacteria\",\"Alphaproteobacteria\",\"Gammaproteobacteria\",\"Cyanobacteria\",\"Bacteroidetes\",\"Betaproteobacteria\"]\n# Sort taxa by mean abundance\ntaxa_order = mg_pn.loc[select_taxa].mean(axis=1).sort_values(ascending=False).index\nax = mg_pn.loc[taxa_order].T.plot(kind=\"area\",stacked=True)\nax.legend(bbox_to_anchor=(1,1))\nax.set_ylabel(\"% normalized abundance\");\nxticks = list(range(0,33))\nax.set_xticks(xticks);\nax.set_xticklabels(mg_pn.columns, rotation=90);\nplt.savefig(\"results/Figure1.svg\", bbox_inches=\"tight\")",
"_____no_output_____"
]
],
[
[
"Generate profiles for metatranscriptomes.",
"_____no_output_____"
]
],
[
[
"# Get sum of abundances at superkingdom level\nmt_k = mt_cov.groupby(\"superkingdom\").sum()\n# Normalize to %\nmt_kn = mt_k.div(mt_k.sum())*100\nmt_kn = mt_kn.loc[[\"Archaea\",\"Bacteria\",\"Eukaryota\",\"Viruses\",\"Unclassified.sequences\",\"other sequences\"]]\nmt_kn = mt_kn.loc[mt_kn.sum(axis=1).sort_values(ascending=False).index]\n# Swtich Proteobacterial classes to phylum\nmt_cov.loc[mt_cov.phylum==\"Proteobacteria\",\"phylum\"] = mt_cov.loc[mt_cov.phylum==\"Proteobacteria\",\"class\"]\n# Normalize at phylum level\nmt_p = mt_cov.groupby(\"phylum\").sum()\nmt_pn = mt_p.div(mt_p.sum())*100",
"_____no_output_____"
]
],
[
[
"Get common taxa for both datasets by taking the union of the top 15 most abundant taxa",
"_____no_output_____"
]
],
[
[
"mg_taxa = mg_pn.mean(axis=1).sort_values(ascending=False).head(15).index\nmt_taxa = mt_pn.mean(axis=1).sort_values(ascending=False).head(15).index\ntaxa = set(mg_taxa).union(set(mt_taxa))",
"_____no_output_____"
]
],
[
[
"Single out eukaryotic taxa",
"_____no_output_____"
]
],
[
[
"euk_taxa = get_euk_taxa(taxa, mg_cov, rank=\"phylum\")",
"_____no_output_____"
]
],
[
[
"Sort the taxa by their mean abundance in the mg data",
"_____no_output_____"
]
],
[
[
"taxa_sort = mg_pn.loc[taxa].mean(axis=1).sort_values(ascending=False).index\ntaxa_colors = dict(zip(taxa_sort,(sns.color_palette(\"Set1\",7)+sns.color_palette(\"Set2\",7)+sns.color_palette(\"Dark2\",5))))\ncolor2taxmap = {}\nfor t, c in taxa_colors.items():\n color2taxmap[c] = t",
"_____no_output_____"
]
],
[
[
"Plot metagenome profiles",
"_____no_output_____"
]
],
[
[
"fig,axes = plt.subplots(ncols=2,nrows=1, figsize=(12,4))\n# Plot the kingdoms\nax1 = mg_kn.T.plot(kind=\"bar\",stacked=True,ax=axes[0])\nax1.legend(loc=\"lower right\",fontsize=\"small\")\nax1.set_ylabel(\"%\")\n\n# Plot the phyla\nax2 = mg_pn.loc[taxa_sort].T.plot(kind=\"bar\",stacked=True, color=[taxa_colors[tax] for tax in taxa_sort], legend=None,ax=axes[1])\nset_euk_hatches(ax2)\nax2.set_ylabel(\"%\")\nax2.legend(bbox_to_anchor=(1,1),fontsize=\"small\");",
"_____no_output_____"
]
],
[
[
"Plot metatranscriptome profiles",
"_____no_output_____"
]
],
[
[
"fig,axes = plt.subplots(ncols=2,nrows=1, figsize=(12,4))\n# Plot the kingdoms\nax1 = mt_kn.T.plot(kind=\"bar\",stacked=True,ax=axes[0])\nax1.legend(loc=\"lower center\",fontsize=\"small\")\nax1.set_ylabel(\"%\")\n\n# Plot the phyla\nax2 = mt_pn.loc[taxa_sort].T.plot(kind=\"bar\",stacked=True, color=[taxa_colors[tax] for tax in taxa_sort], legend=None,ax=axes[1])\nset_euk_hatches(ax2)\nax2.set_ylabel(\"%\")\nax2.legend(bbox_to_anchor=(1,1),fontsize=\"small\");",
"_____no_output_____"
]
],
[
[
"Calculate total number of orders.",
"_____no_output_____"
]
],
[
[
"mg_ordersum = mg_cov.groupby(\"order\").sum()\nmg_total_orders = len(mg_ordersum.loc[mg_ordersum.sum(axis=1)>0])\nprint(\"{} orders in the entire mg dataset\".format(mg_total_orders))\n\nmg_trans_ordersum = mg_select_transcov.groupby(\"order\").sum()\nmg_trans_total_orders = len(mg_trans_ordersum.loc[mg_trans_ordersum.sum(axis=1)>0])\nprint(\"{} orders in the transporter mg dataset\".format(mg_trans_total_orders))",
"_____no_output_____"
],
[
"mt_ordersum = mt_cov.groupby(\"order\").sum()\nmt_total_orders = len(mt_ordersum.loc[mt_ordersum.sum(axis=1)>0])\nprint(\"{} orders in the entire mt dataset\".format(mt_total_orders))\n\nmt_trans_ordersum = mt_select_transcov.groupby(\"order\").sum()\nmt_trans_total_orders = len(mt_trans_ordersum.loc[mt_trans_ordersum.sum(axis=1)>0])\nprint(\"{} orders in the transporter mt dataset\".format(mt_trans_total_orders))",
"_____no_output_____"
]
],
[
[
"## Calculate and plot distributions per taxonomic subsets.",
"_____no_output_____"
],
[
"Extract ORFs belonging to each subset.",
"_____no_output_____"
]
],
[
[
"cya_orfs = mg_transcov.loc[mg_transcov.phylum==\"Cyanobacteria\"].index\nbac_orfs = mg_transcov.loc[(mg_transcov.phylum!=\"Cyanobacteria\")&(mg_transcov.superkingdom==\"Bacteria\")].index\neuk_orfs = mg_transcov.loc[mg_transcov.superkingdom==\"Eukaryota\"].index",
"_____no_output_____"
]
],
[
[
"Calculate contribution of taxonomic subsets to the identified transporters.",
"_____no_output_____"
]
],
[
[
"taxgroup_df = pd.DataFrame(columns=[\"MG\",\"MT\"],index=[\"Bacteria\",\"Cyanobacteria\",\"Eukaryota\"])",
"_____no_output_____"
],
[
"mg_all_transcov_info = pd.merge(transinfo,mg_transcov,left_index=True,right_on=\"transporter\")\nmg_bac_transcov_info = pd.merge(transinfo,mg_transcov.loc[bac_orfs],left_index=True,right_on=\"transporter\")\nmg_euk_transcov_info = pd.merge(transinfo,mg_transcov.loc[euk_orfs],left_index=True,right_on=\"transporter\")\nmg_cya_transcov_info = pd.merge(transinfo,mg_transcov.loc[cya_orfs],left_index=True,right_on=\"transporter\")",
"_____no_output_____"
],
[
"mt_all_transcov_info = pd.merge(transinfo,mt_transcov,left_index=True,right_on=\"transporter\")\nmt_bac_transcov_info = pd.merge(transinfo,mt_transcov.loc[bac_orfs],left_index=True,right_on=\"transporter\")\nmt_euk_transcov_info = pd.merge(transinfo,mt_transcov.loc[euk_orfs],left_index=True,right_on=\"transporter\")\nmt_cya_transcov_info = pd.merge(transinfo,mt_transcov.loc[cya_orfs],left_index=True,right_on=\"transporter\")",
"_____no_output_____"
],
[
"mg_cya_part = mg_cya_transcov_info.groupby(\"transporter\").sum().sum().div(mg_all_transcov_info.groupby(\"transporter\").sum().sum())*100\nmi,ma,me = mg_cya_part.min(),mg_cya_part.max(),mg_cya_part.mean()\ntaxgroup_df.loc[\"Cyanobacteria\",\"MG\"] = \"{}% ({}-{}%)\".format(round(me,2),round(mi,2),round(ma,2))\n\nmg_euk_part = mg_euk_transcov_info.groupby(\"transporter\").sum().sum().div(mg_all_transcov_info.groupby(\"transporter\").sum().sum())*100\nmi,ma,me = mg_euk_part.min(),mg_euk_part.max(),mg_euk_part.mean()\ntaxgroup_df.loc[\"Eukaryota\",\"MG\"] = \"{}% ({}-{}%)\".format(round(me,2),round(mi,2),round(ma,2))\n\nmg_bac_part = mg_bac_transcov_info.groupby(\"transporter\").sum().sum().div(mg_all_transcov_info.groupby(\"transporter\").sum().sum())*100\nmi,ma,me = mg_bac_part.min(),mg_bac_part.max(),mg_bac_part.mean()\ntaxgroup_df.loc[\"Bacteria\",\"MG\"] = \"{}% ({}-{}%)\".format(round(me,2),round(mi,2),round(ma,2))",
"_____no_output_____"
],
[
"mt_cya_part = mt_cya_transcov_info.groupby(\"transporter\").sum().sum().div(mt_all_transcov_info.groupby(\"transporter\").sum().sum())*100\nmi,ma,me = mt_cya_part.min(),mt_cya_part.max(),mt_cya_part.mean()\ntaxgroup_df.loc[\"Cyanobacteria\",\"MT\"] = \"{}% ({}-{}%)\".format(round(me,2),round(mi,2),round(ma,2))\n\nmt_euk_part = mt_euk_transcov_info.groupby(\"transporter\").sum().sum().div(mt_all_transcov_info.groupby(\"transporter\").sum().sum())*100\nmi,ma,me = mt_euk_part.min(),mt_euk_part.max(),mt_euk_part.mean()\ntaxgroup_df.loc[\"Eukaryota\",\"MT\"] = \"{}% ({}-{}%)\".format(round(me,2),round(mi,2),round(ma,2))\n\nmt_bac_part = mt_bac_transcov_info.groupby(\"transporter\").sum().sum().div(mt_all_transcov_info.groupby(\"transporter\").sum().sum())*100\nmi,ma,me = mt_bac_part.min(),mt_bac_part.max(),mt_bac_part.mean()\ntaxgroup_df.loc[\"Bacteria\",\"MT\"] = \"{}% ({}-{}%)\".format(round(me,2),round(mi,2),round(ma,2))",
"_____no_output_____"
],
[
"taxgroup_df",
"_____no_output_____"
]
],
[
[
"### Taxonomic subsets per substrate category",
"_____no_output_____"
]
],
[
[
"def calculate_mean_total_substrate_subset(df,df_sum,subset,var_name=\"Sample\",value_name=\"%\"):\n cols = [\"fam\",\"transporter\",\"substrate_category\",\"name\"]\n # Sum to protein family\n x = df.groupby([\"fam\",\"transporter\",\"substrate_category\",\"name\"]).sum().reset_index()\n cols.pop(cols.index(\"fam\"))\n # Calculate mean of transporters\n x.groupby(cols).mean().reset_index()\n xt = x.copy()\n # Normalize to sum of all transporters\n x.iloc[:,4:] = x.iloc[:,4:].div(df_sum)*100\n # Sum percent to substrate category\n x = x.groupby(\"substrate_category\").sum()\n # Melt dataframe and add subset column\n x[\"substrate_category\"] = x.index\n xm = pd.melt(x,id_vars=\"substrate_category\", var_name=\"Sample\",value_name=\"%\")\n xm = xm.assign(Subset=pd.Series(data=subset,index=xm.index))\n return xm,xt",
"_____no_output_____"
],
[
"# Get contribution of bacterial transporters to total for substrate category\nmg_bac_cat_melt,mg_bac_cat = calculate_mean_total_substrate_subset(mg_bac_transcov_info,mg_trans.sum(),\"Bacteria\")\n# Get contribution of eukaryotic transporters to total for substrate category\nmg_euk_cat_melt,mg_euk_cat = calculate_mean_total_substrate_subset(mg_euk_transcov_info,mg_trans.sum(),\"Eukaryota\")\n# Get contribution of cyanobacterial transporters to total for substrate category\nmg_cya_cat_melt,mg_cya_cat = calculate_mean_total_substrate_subset(mg_cya_transcov_info,mg_trans.sum(),\"Cyanobacteria\")",
"_____no_output_____"
],
[
"# Get contribution of bacterial transporters to total for substrate category\nmt_bac_cat_melt,mt_bac_cat = calculate_mean_total_substrate_subset(mt_bac_transcov_info,mt_trans.sum(),\"Bacteria\")\n# Get contribution of eukaryotic transporters to total for substrate category\nmt_euk_cat_melt,mt_euk_cat = calculate_mean_total_substrate_subset(mt_euk_transcov_info,mt_trans.sum(),\"Eukaryota\")\n# Get contribution of cyanobacterial transporters to total for substrate category\nmt_cya_cat_melt,mt_cya_cat = calculate_mean_total_substrate_subset(mt_cya_transcov_info,mt_trans.sum(),\"Cyanobacteria\")",
"_____no_output_____"
],
[
"# Concatenate dataframes for metagenomes\nmg_subsets_cat = pd.concat([pd.concat([mg_bac_cat_melt,mg_euk_cat_melt]),mg_cya_cat_melt])\nmg_subsets_cat = mg_subsets_cat.assign(dataset=pd.Series(data=\"MG\",index=mg_subsets_cat.index))",
"_____no_output_____"
],
[
"# Concatenate dataframes for metagenomes\nmt_subsets_cat = pd.concat([pd.concat([mt_bac_cat_melt,mt_euk_cat_melt]),mt_cya_cat_melt])\nmt_subsets_cat = mt_subsets_cat.assign(dataset=pd.Series(data=\"MT\",index=mt_subsets_cat.index))",
"_____no_output_____"
]
],
[
[
"**Concatenate MG and MT**",
"_____no_output_____"
]
],
[
[
"subsets_cat = pd.concat([mg_subsets_cat,mt_subsets_cat])",
"_____no_output_____"
]
],
[
[
"### Plot substrate category distributions",
"_____no_output_____"
]
],
[
[
"cats = transinfo.substrate_category.unique()",
"_____no_output_____"
],
[
"# Update Eukaryota subset label\nsubsets_cat.loc[subsets_cat.Subset==\"Eukaryota\",\"Subset\"] = [\"Picoeukaryota\"]*len(subsets_cat.loc[subsets_cat.Subset==\"Eukaryota\",\"Subset\"])",
"_____no_output_____"
],
[
"sns.set(font_scale=0.8)\nax = sns.catplot(kind=\"bar\",data=subsets_cat.loc[subsets_cat.substrate_category.isin(cats)],hue=\"dataset\", \n y=\"substrate_category\", x=\"%\", col=\"Subset\",\n errwidth=1, height=3, palette=\"Set1\", aspect=1)\nax.set_titles(\"{col_name}\")\nax.set_axis_labels(\"% of normalized transporter abundance\",\"Substrate category\")\nplt.savefig(\"results/Figure3A.svg\", bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"_ = mg_transcov.groupby([\"fam\",\"transporter\"]).sum().reset_index()\n_ = _.groupby(\"transporter\").mean()\n_ = pd.merge(transinfo, _, left_index=True, right_index=True)\n_ = _.loc[_.substrate_category==\"Carbohydrate\"].groupby(\"name\").sum()\n(_.div(_.sum())*100).mean(axis=1).sort_values(ascending=False).head(3).sum()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
cb584b5e5653520a36002fe302a583cfd81ced40 | 11,305 | ipynb | Jupyter Notebook | tests_jupyter/special_weights.ipynb | theislab/AutoGeneS | 22bde0d5eba013e90edb85341e0bd9c28b82e7fd | [
"MIT"
] | 46 | 2020-02-25T14:09:21.000Z | 2022-01-20T16:42:40.000Z | tests_jupyter/special_weights.ipynb | theislab/AutoGeneS | 22bde0d5eba013e90edb85341e0bd9c28b82e7fd | [
"MIT"
] | 16 | 2020-03-18T15:08:42.000Z | 2022-01-29T20:00:10.000Z | tests_jupyter/special_weights.ipynb | theislab/AutoGeneS | 22bde0d5eba013e90edb85341e0bd9c28b82e7fd | [
"MIT"
] | 6 | 2020-02-13T14:23:46.000Z | 2021-12-28T16:50:50.000Z | 31.578212 | 956 | 0.499337 | [
[
[
"#import scanpy as sc\nimport anndata\nimport numpy as np\nimport pandas as pd\nimport importlib\n#import pickle\n\nimport sys\nsys.path.append(\"..\")\nimport autogenes",
"_____no_output_____"
],
[
"data = pd.read_csv('../datasets/GSE75748_bulk_data.csv',index_col='index')\ndata = data.T.iloc[:,:100].values\nag = autogenes.AutoGeneS(data)",
"_____no_output_____"
],
[
"ag.run(ngen=10,offspring_size=100,seed=0,weights=(1,),objectives=('distance',))",
"gen\tnevals\tpareto\tdistance \n0 \t100 \t1 \t8.27 - 237.81\n1 \t100 \t1 \t68.86 - 237.81\n2 \t100 \t1 \t141.96 - 237.81\n3 \t100 \t1 \t159.75 - 237.81\n4 \t100 \t1 \t166.6 - 237.81 \n5 \t100 \t1 \t230.26 - 237.81\n6 \t100 \t1 \t233.67 - 237.81\n7 \t100 \t1 \t233.67 - 237.81\n8 \t100 \t1 \t237.81 - 237.81\n9 \t100 \t1 \t237.81 - 237.81\n10 \t100 \t1 \t237.81 - 237.81\n"
],
[
"ag.fitness_matrix",
"_____no_output_____"
],
[
"ag.plot(objectives=(0,0))",
"_____no_output_____"
],
[
"ag.run(ngen=10,offspring_size=100,seed=0,weights=(-1,),objectives=('correlation',))",
"gen\tnevals\tpareto\tcorrelation \n0 \t100 \t1 \t3.56 - 14.24\n1 \t100 \t1 \t3.56 - 8.08 \n2 \t100 \t1 \t3.56 - 6.18 \n3 \t100 \t1 \t3.56 - 5.2 \n4 \t100 \t1 \t3.56 - 4.4 \n5 \t100 \t1 \t3.56 - 4.23 \n6 \t100 \t1 \t3.56 - 4.0 \n7 \t100 \t1 \t3.56 - 4.0 \n8 \t100 \t1 \t3.56 - 3.56 \n9 \t100 \t1 \t3.56 - 3.56 \n10 \t100 \t1 \t3.56 - 3.56 \n"
],
[
"ag.fitness_matrix",
"_____no_output_____"
],
[
"ag.run(ngen=10,offspring_size=100,seed=0,weights=(1,0),objectives=('distance','correlation'))",
"../autogenes/core.py:84: UserWarning: Ignoring objective 'correlation'\n warnings.warn(f\"Ignoring objective '{str(objectives[i])}'\")\n"
],
[
"ag.fitness_matrix",
"_____no_output_____"
],
[
"ag.pareto[0].fitness.wvalues",
"_____no_output_____"
],
[
"def num_genes(data): return data.shape[0]\nag.run(ngen=10,offspring_size=100,seed=0,weights=(1,-1,0),objectives=('distance',num_genes,'correlation'))",
"gen\tnevals\tpareto\tdistance \tnum_genes\n0 \t100 \t1 \t8.27 - 237.81\t6.0 - 6.0\n1 \t100 \t1 \t68.86 - 237.81\t6.0 - 6.0\n2 \t100 \t1 \t141.96 - 237.81\t6.0 - 6.0\n3 \t100 \t1 \t159.75 - 237.81\t6.0 - 6.0\n4 \t100 \t1 \t166.6 - 237.81 \t6.0 - 6.0\n5 \t100 \t1 \t230.26 - 237.81\t6.0 - 6.0\n6 \t100 \t1 \t233.67 - 237.81\t6.0 - 6.0\n7 \t100 \t1 \t233.67 - 237.81\t6.0 - 6.0\n8 \t100 \t1 \t237.81 - 237.81\t6.0 - 6.0\n9 \t100 \t1 \t237.81 - 237.81\t6.0 - 6.0\n10 \t100 \t1 \t237.81 - 237.81\t6.0 - 6.0\n"
],
[
"ag.select()",
"_____no_output_____"
],
[
"ag.run(ngen=10,offspring_size=100,seed=0,weights=(1,-1,0.5),objectives=('distance',num_genes,'correlation'),verbose=False)",
"_____no_output_____"
],
[
"ag.select()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb584bb7e04795030d29dd34f73d1f20e771041f | 8,610 | ipynb | Jupyter Notebook | 04_download.ipynb | brockmanmatt/NewsTrends | 462255e3bdca39397c0c447a06819e48c4893908 | [
"Apache-2.0"
] | null | null | null | 04_download.ipynb | brockmanmatt/NewsTrends | 462255e3bdca39397c0c447a06819e48c4893908 | [
"Apache-2.0"
] | 2 | 2021-09-28T03:17:34.000Z | 2022-02-26T08:22:21.000Z | 04_download.ipynb | brockmanmatt/newstrends | 462255e3bdca39397c0c447a06819e48c4893908 | [
"Apache-2.0"
] | null | null | null | 25.625 | 322 | 0.51928 | [
[
[
"#instructions for how to build this using nbdev at https://nbdev.fast.ai/",
"_____no_output_____"
],
[
"#default_exp download",
"_____no_output_____"
]
],
[
[
"# Download and format datasets of news articles\n\n> Takes a loader as an argument",
"_____no_output_____"
],
[
"## Work in progress",
"_____no_output_____"
],
[
"## Datasets to get:\n-CoverageTrends\n\n-GDELT\n\n-Are there others that are archived?",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport gzip\nimport urllib.request\nimport os",
"_____no_output_____"
]
],
[
[
"# Download CoverageTrends articles\n\nCoverageTrends is a git repo I put together a few weeks ago to start playing with this. It creates daily CSVs for different publishers, updated every 30 minutes with front page coverage. The articles are in CSV form at ",
"_____no_output_____"
]
],
[
[
"from github import Github",
"_____no_output_____"
]
],
[
[
"# Download GDELT Headlines; not sure there's a memory efficient way to do this. Possibly doing some sort of integration with BigQuery?\nfrom https://blog.gdeltproject.org/announcing-gdelt-global-frontpage-graph-gfg/",
"_____no_output_____"
]
],
[
[
"## GDELT files are pretty large (250mb/hr zipped, 1.5gb unzipped)",
"_____no_output_____"
],
[
"#export \ndef downloadGDELT(filepath, year:\"YYYY\", month:\"MM\", day: \"DD\", hour:\"HH\"):\n \"download and extract .gz, borrowed from https://stackoverflow.com/questions/3548495/download-extract-and-read-a-gzip-file-in-python\"\n \n \"\"\"\n WARNING: This results in a 1.5gb file per hour, or 30gb/day. Not actually sure best way to go about making this useful.\n \"\"\"\n \n url = \"http://data.gdeltproject.org/gdeltv3/gfg/alpha/{}{}{}{}0000.LINKS.TXT.gz\".format(year, month, day, hour)\n os.makedirs(filepath, exist_ok=True)\n out_file = \"{}/{}\".format(filepath, url.split(\"/\")[-1][:-3])\n\n try:\n with urllib.request.urlopen(url) as response:\n with gzip.GzipFile(fileobj=response) as uncompressed:\n file_content = uncompressed.read()\n\n with open(out_file, 'wb') as f:\n f.write(file_content)\n return file_content\n\n\n except Exception as e:\n print(e)\n return -1\n",
"_____no_output_____"
],
[
"tmp = downloadGDELT(\"test\", \"2020\", \"06\", \"08\", \"00\")",
"_____no_output_____"
],
[
"os.listdir(\"test\")",
"_____no_output_____"
],
[
"test = pd.read_csv(\"test/20200608000000.LINKS.TXT\", sep='\\\\t', header=None)",
"/Library/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:1: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'.\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"test.fillna(\"\", inplace=True)",
"_____no_output_____"
],
[
"available = test[0].unique()",
"_____no_output_____"
],
[
"[x for x in available if x.find(\"nytimes\") > 0]",
"_____no_output_____"
],
[
"[x for x in available if x.find(\"wsj\") > 0]",
"_____no_output_____"
],
[
"[x for x in available if x.find(\"cnn\") > 0]",
"_____no_output_____"
],
[
"[x for x in available if x.find(\"dailybeast\") > 0]",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5859b2435bd837f428ad9debe9c58159abde10 | 786,722 | ipynb | Jupyter Notebook | python/facenet.ipynb | deepuvsdeepu/tensorflow-101 | 8bbc8fe6fdc25126194a9bbb7ba9efd97f9584b5 | [
"MIT"
] | 832 | 2017-07-11T08:07:14.000Z | 2022-03-26T17:18:19.000Z | python/facenet.ipynb | baharmahmudlu/tensorflow-101 | 4181b551f2cb484ca436265b7498e025d1d49ae2 | [
"MIT"
] | 22 | 2018-04-20T09:30:04.000Z | 2021-11-25T13:14:33.000Z | python/facenet.ipynb | baharmahmudlu/tensorflow-101 | 4181b551f2cb484ca436265b7498e025d1d49ae2 | [
"MIT"
] | 653 | 2017-09-03T03:11:20.000Z | 2022-03-28T19:07:18.000Z | 2,260.695402 | 137,332 | 0.960813 | [
[
[
"import numpy as np\n\nfrom keras.models import Sequential\nfrom keras.models import load_model\nfrom keras.models import model_from_json\nfrom keras.layers.core import Dense, Activation\nfrom keras.utils import np_utils\n\nfrom keras.preprocessing.image import load_img, save_img, img_to_array\nfrom keras.applications.imagenet_utils import preprocess_input\n\nimport matplotlib.pyplot as plt\nfrom keras.preprocessing import image",
"Using TensorFlow backend.\n"
],
[
"#you can find the model at https://github.com/serengil/tensorflow-101/blob/master/model/facenet_model.json\nmodel = model_from_json(open(\"C:/Users/IS96273/Desktop/facenet_model.json\", \"r\").read())\n\n#you can find the pre-trained weights at https://drive.google.com/file/d/1971Xk5RwedbudGgTIrGAL4F7Aifu7id1/view?usp=sharing\nmodel.load_weights('C:/Users/IS96273/Desktop/facenet_weights.h5')\n\n#both model and pre-trained weights are inspired from the work of David Sandberg (github.com/davidsandberg/facenet)\n#and transformed by Sefik Serengil (sefiks.com)",
"_____no_output_____"
],
[
"#model.summary()",
"_____no_output_____"
],
[
"def preprocess_image(image_path):\n img = load_img(image_path, target_size=(160, 160))\n img = img_to_array(img)\n img = np.expand_dims(img, axis=0)\n img = preprocess_input(img)\n return img",
"_____no_output_____"
],
[
"def l2_normalize(x):\n return x / np.sqrt(np.sum(np.multiply(x, x)))\n\ndef findCosineSimilarity(source_representation, test_representation):\n a = np.matmul(np.transpose(source_representation), test_representation)\n b = np.sum(np.multiply(source_representation, source_representation))\n c = np.sum(np.multiply(test_representation, test_representation))\n return 1 - (a / (np.sqrt(b) * np.sqrt(c)))\n\ndef findEuclideanDistance(source_representation, test_representation):\n euclidean_distance = source_representation - test_representation\n euclidean_distance = np.sum(np.multiply(euclidean_distance, euclidean_distance))\n euclidean_distance = np.sqrt(euclidean_distance)\n return euclidean_distance",
"_____no_output_____"
],
[
"metric = \"euclidean\" #euclidean or cosine\n\nthreshold = 0\nif metric == \"euclidean\":\n threshold = 0.35\nelif metric == \"cosine\":\n threshold = 0.07\n\ndef verifyFace(img1, img2):\n #produce 128-dimensional representation\n img1_representation = model.predict(preprocess_image('C:/Users/IS96273/Desktop/trainset/%s' % (img1)))[0,:]\n img2_representation = model.predict(preprocess_image('C:/Users/IS96273/Desktop/trainset/%s' % (img2)))[0,:]\n \n if metric == \"euclidean\":\n img1_representation = l2_normalize(img1_representation)\n img2_representation = l2_normalize(img2_representation)\n\n euclidean_distance = findEuclideanDistance(img1_representation, img2_representation)\n print(\"euclidean distance (l2 norm): \",euclidean_distance)\n\n if euclidean_distance < threshold:\n print(\"verified... they are same person\")\n else:\n print(\"unverified! they are not same person!\")\n \n elif metric == \"cosine\":\n cosine_similarity = findCosineSimilarity(img1_representation, img2_representation)\n print(\"cosine similarity: \",cosine_similarity)\n\n if cosine_similarity < 0.07:\n print(\"verified... they are same person\")\n else:\n print(\"unverified! they are not same person!\")\n \n f = plt.figure()\n f.add_subplot(1,2, 1)\n plt.imshow(image.load_img('C:/Users/IS96273/Desktop/trainset/%s' % (img1)))\n plt.xticks([]); plt.yticks([])\n f.add_subplot(1,2, 2)\n plt.imshow(image.load_img('C:/Users/IS96273/Desktop/trainset/%s' % (img2)))\n plt.xticks([]); plt.yticks([])\n plt.show(block=True)\n print(\"-----------------------------------------\")",
"_____no_output_____"
],
[
"#true positive\nverifyFace(\"1.jpg\", \"5.jpg\")\nverifyFace(\"1.jpg\", \"7.jpg\")",
"euclidean distance (l2 norm): 0.1944712\nverified... they are same person\n"
],
[
"#true negative\nverifyFace(\"1.jpg\", \"8.jpg\")\nverifyFace(\"1.jpg\", \"10.jpg\")",
"euclidean distance (l2 norm): 0.4257992\nunverified! they are not same person!\n"
],
[
"#true positive\nverifyFace(\"17.jpg\", \"8.jpg\")\nverifyFace(\"17.jpg\", \"9.jpg\")",
"euclidean distance (l2 norm): 0.32390624\nverified... they are same person\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb585e63212bd10573cbb38089d6f39dee21985a | 42,860 | ipynb | Jupyter Notebook | notebooks/pawel_ueb01/.ipynb_checkpoints/03_Cross_validation_and_grid_search_bielski-checkpoint.ipynb | hhain/sdap17 | 8bd0b4cb60d6140141c834ffcac8835a888a0949 | [
"MIT"
] | null | null | null | notebooks/pawel_ueb01/.ipynb_checkpoints/03_Cross_validation_and_grid_search_bielski-checkpoint.ipynb | hhain/sdap17 | 8bd0b4cb60d6140141c834ffcac8835a888a0949 | [
"MIT"
] | 1 | 2017-06-08T22:32:48.000Z | 2017-06-08T22:32:48.000Z | notebooks/pawel_ueb01/.ipynb_checkpoints/03_Cross_validation_and_grid_search_bielski-checkpoint.ipynb | hhain/sdap17 | 8bd0b4cb60d6140141c834ffcac8835a888a0949 | [
"MIT"
] | null | null | null | 145.782313 | 18,312 | 0.883714 | [
[
[
"# Load libraries\nimport pandas as pd\nimport numpy as np\nfrom pandas.tools.plotting import scatter_matrix\nimport matplotlib.pyplot as plt\nimport time\n\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.neighbors import KNeighborsClassifier",
"_____no_output_____"
],
[
"# Load dataset\nurl = \"https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data\"\nnames = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']\ndataset = pd.read_csv(url, names=names)",
"_____no_output_____"
],
[
"print(dataset.shape)",
"(150, 5)\n"
],
[
"print(dataset.head(5))",
" sepal-length sepal-width petal-length petal-width class\n0 5.1 3.5 1.4 0.2 Iris-setosa\n1 4.9 3.0 1.4 0.2 Iris-setosa\n2 4.7 3.2 1.3 0.2 Iris-setosa\n3 4.6 3.1 1.5 0.2 Iris-setosa\n4 5.0 3.6 1.4 0.2 Iris-setosa\n"
],
[
"# Split-out validation dataset\narray = dataset.values\nX = array[:,0:4]\ny = array[:,4]\nvalidation_size = 0.20\nseed = 7\nX_train, X_validation, y_train, y_validation = train_test_split(X, y, test_size=validation_size, random_state=seed)",
"_____no_output_____"
],
[
"# Test options and evaluation metric\nseed = 7\nscoring = 'accuracy'",
"_____no_output_____"
],
[
"# test different number of cores: max 8\nnum_cpu_list = list(range(1,9))\ntraining_times_all = []",
"_____no_output_____"
],
[
"param_grid = {\"n_neighbors\" : list(range(1,10))}\ntraining_times = []\n\nfor num_cpu in num_cpu_list:\n clf = GridSearchCV(KNeighborsClassifier(), param_grid, scoring=scoring)\n clf.set_params(n_jobs=num_cpu)\n start_time = time.time()\n clf.fit(X_train, y_train)\n training_times.append(time.time() - start_time)\n # print logging message\n print(\"Computing KNN grid with {} cores DONE.\".format(num_cpu))\n\nprint(\"All computations DONE.\")",
"Computing KNN grid for 1 cores DONE.\nComputing KNN grid for 2 cores DONE.\nComputing KNN grid for 3 cores DONE.\nComputing KNN grid for 4 cores DONE.\nComputing KNN grid for 5 cores DONE.\nComputing KNN grid for 6 cores DONE.\nComputing KNN grid for 7 cores DONE.\nComputing KNN grid for 8 cores DONE.\nAll computations DONE.\n"
],
[
"# best parameters found\nprint(\"Best parameters:\")\nprint(clf.best_params_)\nprint(\"With accuracy:\")\nprint(clf.best_score_)\n",
"Best parameters:\n{'n_neighbors': 7}\nWith accuracy:\n0.991666666667\n"
],
[
"scores_all_percent = [100 * grid_score[1] for grid_score in clf.grid_scores_]\nparams_all = [grid_score[0][\"n_neighbors\"] for grid_score in clf.grid_scores_]\n\nN = 9\nind = np.arange(N) # the x locations for bars\nwidth = 0.5 # the width of the bars\n\nfig, ax = plt.subplots()\nax.bar(ind + width/2, scores_all_percent, width)\nax.set_xticks(ind + width)\nax.set_xticklabels([str(i) for i in params_all])\nax.set_ylim([90,100])\nplt.title(\"Accuracy of KNN vs n_neighbors param\")\nplt.xlabel(\"n_neighbors\")\nplt.ylabel(\"accuracy [%]\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"The above plot shows that the best accuracy for KNN algorithm is obtained for **n_neighbors = 7**",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots()\nax.plot(num_cpu_list, training_times, 'ro')\nax.set_xlim([0, len(num_cpu_list)+1])\n\n#plt.axis([0, len(num_cpu_list)+1, 0, max(training_times)+1])\nplt.title(\"Search time vs #CPU Cores\")\nplt.xlabel(\"#CPU Cores\")\nplt.ylabel(\"search time [s]\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"\nWe can see that the search time for **n_jobs > 1** is highier than for **n_jobs = 1**. The reason is that multiprocessing comes at cost i.e. the distribution of multiple processes can take more time that the actual execution time for the small datasets like **Iris** (150 rows).",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb58654c301470f4d59f1cb03909b6c6d710b753 | 636,204 | ipynb | Jupyter Notebook | code/yinyang/kats_experiments/kats_detect_outliers.ipynb | lady-h-world/My_Garden | 3b2bea500d79210a7166be3e3768033b403bf5d5 | [
"MIT"
] | null | null | null | code/yinyang/kats_experiments/kats_detect_outliers.ipynb | lady-h-world/My_Garden | 3b2bea500d79210a7166be3e3768033b403bf5d5 | [
"MIT"
] | null | null | null | code/yinyang/kats_experiments/kats_detect_outliers.ipynb | lady-h-world/My_Garden | 3b2bea500d79210a7166be3e3768033b403bf5d5 | [
"MIT"
] | 1 | 2021-11-29T17:04:52.000Z | 2021-11-29T17:04:52.000Z | 376.674956 | 234,576 | 0.916923 | [
[
[
"Licensed under the MIT License.\n\nCopyright (c) 2021-2031. All rights reserved.\n\n# Kats Outliers Detection\n\n* Kats General\n * `TimeSeriesData` params and methods: https://facebookresearch.github.io/Kats/api/kats.consts.html#kats.consts.TimeSeriesData\n* Kats Detection\n * Kats detection official tutorial: https://github.com/facebookresearch/Kats/blob/main/tutorials/kats_202_detection.ipynb\n * It describes Kats' outlier detector's algorithms\n * But Kats' multivariate anomaly detection only output strange errors to me, even using the same tutorial code, see this ticket: https://github.com/facebookresearch/Kats/issues/194\n* Other Kats Outlier Detectors\n * https://facebookresearch.github.io/Kats/api/kats.detectors.prophet_detector.html\n * Kats v0.1 requires prophet version to be \"0.7\" exactly, other will get errors, but my laptop could only install higher version prophet...\n * https://facebookresearch.github.io/Kats/api/kats.detectors.hourly_ratio_detection.html\n * It requires the time series to be hour-level granularity",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom statsmodels.tsa.seasonal import seasonal_decompose\nfrom statsmodels.tsa.stattools import kpss\nfrom statsmodels.tsa.stattools import adfuller\nfrom statsmodels.stats.stattools import durbin_watson\nfrom statsmodels.tsa.api import VAR\nfrom statsmodels.tsa.vector_ar.vecm import VECM\n\nfrom kats.consts import TimeSeriesData\nfrom kats.detectors.outlier import OutlierDetector\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")",
"_____no_output_____"
],
[
"ts_df = pd.read_pickle('../../crystal_ball/data_collector/structured_data/sales_ts.pkl')\nprint(ts_df.shape)\n\nts_df_train = ts_df.iloc[ts_df.index < '2015-03-01']\nprint(ts_df_train.shape)\n\nts_df.head()",
"(942, 1)\n(789, 1)\n"
],
[
"def plot_ts(ts, title):\n plt.figure(figsize=(20,3))\n for col in ts.columns:\n fig = plt.plot(ts[col], label=col)\n plt.title(title)\n plt.legend(loc='best')\n plt.tight_layout()\n plt.show()\n \n\ndef plot_ts_outliers(ts, title, outliers, decomp='additive'):\n outliers_x = [str(outlier).split()[0] for outlier in outliers[0]]\n outliers_y = ts.iloc[ts.index.isin(outliers_x)]\n \n plt.figure(figsize=(20,10))\n plt.subplot(411)\n fig = plt.plot(ts, label='original ts', color='blue')\n plt.scatter(outliers_x, outliers_y, c='red', marker='*')\n plt.legend(loc='best')\n \n plt.subplot(412)\n decomposition = seasonal_decompose(ts, model=decomp)\n residual = decomposition.resid\n fig = plt.plot(residual, label='residuals', color='purple')\n outliers_y_res = residual.iloc[residual.index.isin(outliers_x)]\n plt.scatter(outliers_x, outliers_y_res, c='red', marker='*')\n plt.legend(loc='best')\n \n plt.title(title)\n plt.tight_layout()\n plt.show()",
"_____no_output_____"
],
[
"plot_ts(ts_df_train, title='Univariate training ts plot')",
"_____no_output_____"
],
[
"# Covert to Kats required TimeSeriesData input\n\nkats_ts_all = TimeSeriesData(ts_df_train.reset_index().rename(index=str, columns={'Date': 'time'}))\nprint(len(kats_ts_all))",
"789\n"
]
],
[
[
"## Univariate OutlierDetector\n\n* Kats' outlier detector: https://facebookresearch.github.io/Kats/api/kats.detectors.outlier.html",
"_____no_output_____"
]
],
[
[
"# detect & plot outliers\n\nts_outlierDetection = OutlierDetector(kats_ts_all, 'multiplication', iqr_mult=5)\nts_outlierDetection.detector()\n\nplot_ts_outliers(ts_df_train, title='Outliers in all ts train', outliers=ts_outlierDetection.outliers, decomp='multipllicative')",
"_____no_output_____"
],
[
"# remove and plot outliers\n\nts_outlierDetection_outliers_removed = ts_outlierDetection.remover(interpolate = False) # No interpolation\nts_outlierDetection_interpolated = ts_outlierDetection.remover(interpolate = True) # With linear interpolation\n\nts_outlierDetection_outliers_removed",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(25,8), nrows=1, ncols=2)\nts_outlierDetection_outliers_removed.to_dataframe().plot(x='time',y = 'y_0', ax = ax[0])\nax[0].set_title(\"Outliers Removed : No interpolation\")\nts_outlierDetection_interpolated.to_dataframe().plot(x = 'time',y = 'y_0', ax = ax[1])\nax[1].set_title(\"Outliers Removed : With interpolation\")\nplt.show()",
"_____no_output_____"
],
[
"sub_original_df = ts_df_train.iloc[(ts_df_train.index>='2013-12-22') & (ts_df_train.index<='2014-01-02')]\nsub_df_removed = ts_outlierDetection_outliers_removed.to_dataframe()\nsub_df_removed = sub_df_removed.loc[(sub_df_removed['time']>='2013-12-22') & (sub_df_removed['time']<='2014-01-02')]\n\nsub_df_interpolated = ts_outlierDetection_interpolated.to_dataframe()\nsub_df_interpolated = sub_df_interpolated.loc[(sub_df_interpolated['time']>='2013-12-22') & (sub_df_interpolated['time']<='2014-01-02')]\n\nfig, ax = plt.subplots(figsize=(25,8), nrows=1, ncols=2)\nsub_original_df.reset_index().plot(x='Date', y='Daily_Sales', ax=ax[0], color='orange', marker='o', label='original ts')\nsub_df_removed.plot(x='time', y='y_0', ax= ax[0], color='green', label='outlier removed ts')\nax[0].set_title(\"Outliers Removed Subset: No interpolation\")\n\nsub_original_df.reset_index().plot(x='Date', y='Daily_Sales', ax=ax[1], color='orange', marker='o', label='original ts')\nsub_df_interpolated.plot(x = 'time',y = 'y_0', ax= ax[1], color='green', label='outlier interpolated ts')\nax[1].set_title(\"Outliers Removed Subset: With interpolation\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Multivariate Anomaly Detection\n\n* References\n * VAR for anomaly detection: https://www.analyticsvidhya.com/blog/2021/08/multivariate-time-series-anomaly-detection-using-var-model/\n * More about VAR: https://www.machinelearningplus.com/time-series/vector-autoregression-examples-python/\n* More multivariate time series models: https://www.statsmodels.org/dev/api.html#multivariate-time-series-models",
"_____no_output_____"
]
],
[
[
"mul_ts_df = pd.read_pickle('../../crystal_ball/data_collector/structured_data/multivar_ts.pkl')\n\nprint(mul_ts_df.shape)\nmul_ts_df.head()",
"(8143, 6)\n"
],
[
"occupancy = mul_ts_df[['Occupancy']]\nmul_ts_df.drop('Occupancy', inplace=True, axis=1)\nprint(mul_ts_df.shape)",
"(8143, 5)\n"
]
],
[
[
"### Convert Data to Stationary",
"_____no_output_____"
]
],
[
[
"def test_stationarity_multi_ts(multi_ts_df):\n results_dct = {}\n \n for col in multi_ts_df.columns:\n timeseries = multi_ts_df[col]\n \n adf_result, kpss_result = None, None\n results_dct[col] = {'Differencing Stationary': None, 'Trending Stationary': None}\n \n # Perform Augmented Dickey-Fuller test:\n adftest = adfuller(timeseries, autolag='AIC')\n adf_output = pd.Series(adftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])\n adf_test_stats = adf_output['Test Statistic']\n for key,value in adftest[4].items():\n adf_output[f'Critical Value {key}'] = value\n \n if abs(adf_test_stats) >= abs(adf_output[f'Critical Value 1%']):\n adf_result = '99%'\n elif abs(adf_test_stats) >= abs(adf_output[f'Critical Value 5%']) and abs(adf_test_stats) < abs(adf_output[f'Critical Value 1%']):\n adf_result = '95%'\n elif abs(adf_test_stats) >= abs(adf_output[f'Critical Value 10%']) and abs(adf_test_stats) < abs(adf_output[f'Critical Value 5%']):\n adf_result = '90%'\n\n \n # Perform KPSS\n kpsstest = kpss(timeseries, regression='c')\n kpss_output = pd.Series(kpsstest[0:3], index=['Test Statistic','p-value','Lags Used'])\n kpss_test_stats = kpss_output['Test Statistic']\n for key,value in kpsstest[3].items():\n kpss_output[f'Critical Value {key}'] = value\n\n if abs(kpss_test_stats) >= abs(kpss_output['Critical Value 1%']):\n kpss_result = '99%'\n elif abs(kpss_test_stats) >= abs(kpss_output['Critical Value 2.5%']) and abs(kpss_test_stats) < abs(kpss_output[f'Critical Value 1%']):\n kpss_result = '97.5%'\n elif abs(kpss_test_stats) >= abs(kpss_output['Critical Value 5%']) and abs(kpss_test_stats) < abs(kpss_output[f'Critical Value 2.5%']):\n kpss_result = '95%'\n elif abs(kpss_test_stats) >= abs(kpss_output['Critical Value 10%']) and abs(kpss_test_stats) < abs(kpss_output[f'Critical Value 5%']):\n kpss_result = '90%'\n \n results_dct[col]['Differencing Stationary'] = adf_result\n results_dct[col]['Trending Stationary'] = kpss_result\n \n return results_dct\n\n\ndef detect_anomalies(squared_errors, n=1):\n threshold = np.mean(squared_errors) + n*np.std(squared_errors)\n detections = (squared_errors >= threshold).astype(int)\n \n return threshold, detections",
"_____no_output_____"
],
[
"mul_ts_df['Humidity'] = mul_ts_df['Humidity'].diff()\nmul_ts_df['HumidityRatio'] = mul_ts_df['HumidityRatio'].diff()\nmul_ts_df = mul_ts_df.dropna()\nprint(mul_ts_df.shape)\n\nmulti_ts_stationary = test_stationarity_multi_ts(mul_ts_df)\nfor k, v in multi_ts_stationary.items():\n print(k)\n print(v)\n print()",
"(8142, 5)\nTemperature\n{'Differencing Stationary': '90%', 'Trending Stationary': '99%'}\n\nHumidity\n{'Differencing Stationary': '99%', 'Trending Stationary': '97.5%'}\n\nLight\n{'Differencing Stationary': '95%', 'Trending Stationary': '99%'}\n\nCO2\n{'Differencing Stationary': '99%', 'Trending Stationary': '99%'}\n\nHumidityRatio\n{'Differencing Stationary': '99%', 'Trending Stationary': '97.5%'}\n\n"
]
],
[
[
"### VAR to Detect Anomalies\n\n* The way it detects anomalies is to find observations with residuals above a threshold",
"_____no_output_____"
]
],
[
[
"# select better model order\nmax_lag = 20\nvar_model = VAR(mul_ts_df)\nlag_results = var_model.select_order(max_lag)\nselected_lag = lag_results.aic\nprint(f'Selected VAR order is {selected_lag}')\n\nlag_results.summary()",
"Selected VAR order is 18\n"
],
[
"model_fitted = var_model.fit(selected_lag)\n\n# durbin_watson test to check whether there is any leftover pattern in the residuals, closer to 2, the better\ndw_scores = durbin_watson(model_fitted.resid)\n\nfor col, dw in zip(mul_ts_df.columns, dw_scores):\n print(f'{col}: {dw}')",
"Temperature: 1.9999348527449858\nHumidity: 2.0005990988062603\nLight: 2.001429820977524\nCO2: 2.0001469651272985\nHumidityRatio: 2.0004844080080337\n"
],
[
"model_fitted.resid",
"_____no_output_____"
],
[
"squared_errors = model_fitted.resid.sum(axis=1)**2\n\nthreshold, detections = detect_anomalies(squared_errors, n=1)\ndetected_mul_ts_df = mul_ts_df.copy()\ndetected_mul_ts_df['anomaly_detection'] = detections\ndetected_mul_ts_df['Occupancy'] = occupancy\ndetected_mul_ts_df = detected_mul_ts_df.iloc[selected_lag:, :]\nprint(f'Threshold: {threshold}')\n\ndetected_mul_ts_df.head()",
"Threshold: 11333.96059868146\n"
],
[
"detected_mul_ts_df.loc[detected_mul_ts_df['anomaly_detection']==1].head()",
"_____no_output_____"
],
[
"# Check whether there's any anomaly pattern in different occupancy\nno_occpupancy_df = detected_mul_ts_df.loc[detected_mul_ts_df['Occupancy']==0]\nhas_occpupancy_df = detected_mul_ts_df.loc[detected_mul_ts_df['Occupancy']==1]\n\nprint(no_occpupancy_df['anomaly_detection'].value_counts()/len(no_occpupancy_df))\nprint()\nprint(has_occpupancy_df['anomaly_detection'].value_counts()/len(has_occpupancy_df))",
"0.0 0.995321\n1.0 0.004679\nName: anomaly_detection, dtype: float64\n\n0.0 0.991243\n1.0 0.008757\nName: anomaly_detection, dtype: float64\n"
]
],
[
[
"### VECM to Detect Anomalies\n\n* About VECM: https://www.statsmodels.org/dev/generated/statsmodels.tsa.vector_ar.vecm.VECM.html#statsmodels.tsa.vector_ar.vecm.VECM",
"_____no_output_____"
]
],
[
[
"k_ar_diff = 18\nvecm_model = VECM(mul_ts_df, k_ar_diff=k_ar_diff)\nvecm_model_fitted = vecm_model.fit()",
"_____no_output_____"
],
[
"vecm_dw_scores = durbin_watson(vecm_model_fitted.resid)\n\nfor col, dw in zip(mul_ts_df.columns, vecm_dw_scores):\n print(f'{col}: {dw}')",
"Temperature: 2.0003353804623627\nHumidity: 2.0001296254440404\nLight: 2.0001636903225966\nCO2: 2.0004122200313583\nHumidityRatio: 2.0004128612279386\n"
],
[
"vecm_squared_errors = vecm_model_fitted.resid.sum(axis=1)**2\n\nvecm_threshold, vecm_detections = detect_anomalies(vecm_squared_errors, n=1)\nvecm_detected_mul_ts_df = mul_ts_df.iloc[k_ar_diff+1:, :]\nvecm_detected_mul_ts_df['anomaly_detection'] = vecm_detections\n\nprint(f'Threshold: {threshold}')\n\nvecm_detected_mul_ts_df.head()",
"Threshold: 11333.96059868146\n"
],
[
"compare_df = pd.merge(vecm_detected_mul_ts_df[['anomaly_detection']], detected_mul_ts_df[['anomaly_detection']], left_index=True, right_index=True)\nprint(len(compare_df))\n\ncompare_df.head()",
"8123\n"
],
[
"compare_df.loc[(compare_df['anomaly_detection_x'] != compare_df['anomaly_detection_y'])]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
cb58676747fed5c7e9c471a7604707cf561ddf7f | 3,534 | ipynb | Jupyter Notebook | pipes.ipynb | choyrim/pipes-for-pyspark-hack | 3cd39b70bdafb227be4934dec995f86915bdcae3 | [
"Apache-2.0"
] | null | null | null | pipes.ipynb | choyrim/pipes-for-pyspark-hack | 3cd39b70bdafb227be4934dec995f86915bdcae3 | [
"Apache-2.0"
] | null | null | null | pipes.ipynb | choyrim/pipes-for-pyspark-hack | 3cd39b70bdafb227be4934dec995f86915bdcae3 | [
"Apache-2.0"
] | null | null | null | 18.6 | 68 | 0.472835 | [
[
[
"# Start spark session\n\nSpark UI available at http://127.0.0.1:4040/",
"_____no_output_____"
]
],
[
[
"from pyspark.sql import SparkSession\n\nspark = SparkSession.builder.master('local').getOrCreate()",
"_____no_output_____"
]
],
[
[
"# load module",
"_____no_output_____"
]
],
[
[
"import sys\n\n# put the source code in the python path\nsys.path.insert(0, \"/home/jovyan/work/src\")",
"_____no_output_____"
],
[
"# load the pipes module\nfrom pyspark_pipes import *",
"_____no_output_____"
]
],
[
[
"# sample data",
"_____no_output_____"
]
],
[
[
"# load sample data\ndf = spark.read.csv(\n \"tests/sample-data.csv\",\n header=True,\n inferSchema=True,\n)",
"_____no_output_____"
]
],
[
[
"# try some queries",
"_____no_output_____"
]
],
[
[
"# basic row count\ndf | count",
"_____no_output_____"
],
[
"# show a few rows\ndf | show(3)",
"_____no_output_____"
],
[
"# projection\n(df\n | select(\n \"`Order ID`\",\n \"`Units Sold` * `Unit Cost` SoldxCost\",\n \"`Total Cost`\",\n )\n | show(3)\n)",
"_____no_output_____"
],
[
"# filter\n(df\n | where(\"`Item Type` = 'Cosmetics'\")\n | show(3)\n)",
"_____no_output_____"
],
[
"# example aggregation\n(df\n | group_by(\"Order Priority\")\n | agg(\"count(1) n_rows\")\n | order_by(\"n_rows\", ascending=0)\n | show(10)\n)",
"_____no_output_____"
],
[
"# compose\ndef only_item_type(item_type):\n def filter(df):\n return df | where(f\"`Item Type` = '{item_type}'\")\n return filter\n\n(df | only_item_type(\"Cosmetics\") | show(3))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5868df702c2c2e1e37ee05ce5ecebe938e4d60 | 22,061 | ipynb | Jupyter Notebook | colab/07_reading_tabular.ipynb | mfernandes61/python-intro-gapminder | 894579dc093b03cdc211e095a5aaf7e401b525bf | [
"CC-BY-4.0"
] | null | null | null | colab/07_reading_tabular.ipynb | mfernandes61/python-intro-gapminder | 894579dc093b03cdc211e095a5aaf7e401b525bf | [
"CC-BY-4.0"
] | null | null | null | colab/07_reading_tabular.ipynb | mfernandes61/python-intro-gapminder | 894579dc093b03cdc211e095a5aaf7e401b525bf | [
"CC-BY-4.0"
] | null | null | null | 48.592511 | 258 | 0.515434 | [
[
[
"<a href=\"https://colab.research.google.com/github/mfernandes61/python-intro-gapminder/blob/binder/colab/07_reading_tabular.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"---\ntitle: \"Reading Tabular Data into DataFrames\"\nteaching: 10\nexercises: 10\nquestions:\n- \"How can I read tabular data?\"\nobjectives:\n- \"Import the Pandas library.\"\n- \"Use Pandas to load a simple CSV data set.\"\n- \"Get some basic information about a Pandas DataFrame.\"\nkeypoints:\n- \"Use the Pandas library to get basic statistics out of tabular data.\"\n- \"Use `index_col` to specify that a column's values should be used as row headings.\"\n- \"Use `DataFrame.info` to find out more about a dataframe.\"\n- \"The `DataFrame.columns` variable stores information about the dataframe's columns.\"\n- \"Use `DataFrame.T` to transpose a dataframe.\"\n- \"Use `DataFrame.describe` to get summary statistics about data.\"\n---\n## Use the Pandas library to do statistics on tabular data.\n\n* Pandas is a widely-used Python library for statistics, particularly on tabular data.\n* Borrows many features from R's dataframes.\n * A 2-dimensional table whose columns have names\n and potentially have different data types.\n* Load it with `import pandas as pd`. The alias pd is commonly used for Pandas.\n* Read a Comma Separated Values (CSV) data file with `pd.read_csv`.\n * Argument is the name of the file to be read.\n * Assign result to a variable to store the data that was read.\n\n~~~\nimport pandas as pd\n\ndata = pd.read_csv('data/gapminder_gdp_oceania.csv')\nprint(data)\n~~~\n{: .language-python}\n~~~\n country gdpPercap_1952 gdpPercap_1957 gdpPercap_1962 \\\n0 Australia 10039.59564 10949.64959 12217.22686\n1 New Zealand 10556.57566 12247.39532 13175.67800\n\n gdpPercap_1967 gdpPercap_1972 gdpPercap_1977 gdpPercap_1982 \\\n0 14526.12465 16788.62948 18334.19751 19477.00928\n1 14463.91893 16046.03728 16233.71770 17632.41040\n\n gdpPercap_1987 gdpPercap_1992 gdpPercap_1997 gdpPercap_2002 \\\n0 21888.88903 23424.76683 26997.93657 30687.75473\n1 19007.19129 18363.32494 21050.41377 23189.80135\n\n gdpPercap_2007\n0 34435.36744\n1 25185.00911\n~~~\n{: .output}\n\n* The columns in a dataframe are the observed variables, and the rows are the observations.\n* Pandas uses backslash `\\` to show wrapped lines when output is too wide to fit the screen.\n\n> ## File Not Found\n>\n> Our lessons store their data files in a `data` sub-directory,\n> which is why the path to the file is `data/gapminder_gdp_oceania.csv`.\n> If you forget to include `data/`,\n> or if you include it but your copy of the file is somewhere else,\n> you will get a [runtime error]({{ page.root }}/04-built-in/#runtime-error)\n> that ends with a line like this:\n>\n> ~~~\n> FileNotFoundError: [Errno 2] No such file or directory: 'data/gapminder_gdp_oceania.csv'\n> ~~~\n> {: .error}\n{: .callout}\n\n## Use `index_col` to specify that a column's values should be used as row headings.\n\n* Row headings are numbers (0 and 1 in this case).\n* Really want to index by country.\n* Pass the name of the column to `read_csv` as its `index_col` parameter to do this.\n\n~~~\ndata = pd.read_csv('data/gapminder_gdp_oceania.csv', index_col='country')\nprint(data)\n~~~\n{: .language-python}\n~~~\n gdpPercap_1952 gdpPercap_1957 gdpPercap_1962 gdpPercap_1967 \\\ncountry\nAustralia 10039.59564 10949.64959 12217.22686 14526.12465\nNew Zealand 10556.57566 12247.39532 13175.67800 14463.91893\n\n gdpPercap_1972 gdpPercap_1977 gdpPercap_1982 gdpPercap_1987 \\\ncountry\nAustralia 16788.62948 18334.19751 19477.00928 21888.88903\nNew Zealand 16046.03728 16233.71770 17632.41040 19007.19129\n\n gdpPercap_1992 gdpPercap_1997 gdpPercap_2002 
gdpPercap_2007\ncountry\nAustralia 23424.76683 26997.93657 30687.75473 34435.36744\nNew Zealand 18363.32494 21050.41377 23189.80135 25185.00911\n~~~\n{: .output}\n\n## Use the `DataFrame.info()` method to find out more about a dataframe.\n\n~~~\ndata.info()\n~~~\n{: .language-python}\n~~~\n<class 'pandas.core.frame.DataFrame'>\nIndex: 2 entries, Australia to New Zealand\nData columns (total 12 columns):\ngdpPercap_1952 2 non-null float64\ngdpPercap_1957 2 non-null float64\ngdpPercap_1962 2 non-null float64\ngdpPercap_1967 2 non-null float64\ngdpPercap_1972 2 non-null float64\ngdpPercap_1977 2 non-null float64\ngdpPercap_1982 2 non-null float64\ngdpPercap_1987 2 non-null float64\ngdpPercap_1992 2 non-null float64\ngdpPercap_1997 2 non-null float64\ngdpPercap_2002 2 non-null float64\ngdpPercap_2007 2 non-null float64\ndtypes: float64(12)\nmemory usage: 208.0+ bytes\n~~~\n{: .output}\n\n* This is a `DataFrame`\n* Two rows named `'Australia'` and `'New Zealand'`\n* Twelve columns, each of which has two actual 64-bit floating point values.\n * We will talk later about null values, which are used to represent missing observations.\n* Uses 208 bytes of memory.\n\n## The `DataFrame.columns` variable stores information about the dataframe's columns.\n\n* Note that this is data, *not* a method. (It doesn't have parentheses.)\n * Like `math.pi`.\n * So do not use `()` to try to call it.\n* Called a *member variable*, or just *member*.\n\n~~~\nprint(data.columns)\n~~~\n{: .language-python}\n~~~\nIndex(['gdpPercap_1952', 'gdpPercap_1957', 'gdpPercap_1962', 'gdpPercap_1967',\n 'gdpPercap_1972', 'gdpPercap_1977', 'gdpPercap_1982', 'gdpPercap_1987',\n 'gdpPercap_1992', 'gdpPercap_1997', 'gdpPercap_2002', 'gdpPercap_2007'],\n dtype='object')\n~~~\n{: .output}\n\n## Use `DataFrame.T` to transpose a dataframe.\n\n* Sometimes want to treat columns as rows and vice versa.\n* Transpose (written `.T`) doesn't copy the data, just changes the program's view of it.\n* Like `columns`, it is a member variable.\n\n~~~\nprint(data.T)\n~~~\n{: .language-python}\n~~~\ncountry Australia New Zealand\ngdpPercap_1952 10039.59564 10556.57566\ngdpPercap_1957 10949.64959 12247.39532\ngdpPercap_1962 12217.22686 13175.67800\ngdpPercap_1967 14526.12465 14463.91893\ngdpPercap_1972 16788.62948 16046.03728\ngdpPercap_1977 18334.19751 16233.71770\ngdpPercap_1982 19477.00928 17632.41040\ngdpPercap_1987 21888.88903 19007.19129\ngdpPercap_1992 23424.76683 18363.32494\ngdpPercap_1997 26997.93657 21050.41377\ngdpPercap_2002 30687.75473 23189.80135\ngdpPercap_2007 34435.36744 25185.00911\n~~~\n{: .output}\n\n## Use `DataFrame.describe()` to get summary statistics about data.\n\n`DataFrame.describe()` gets the summary statistics of only the columns that have numerical data. 
\nAll other columns are ignored, unless you use the argument `include='all'`.\n~~~\nprint(data.describe())\n~~~\n{: .language-python}\n~~~\n gdpPercap_1952 gdpPercap_1957 gdpPercap_1962 gdpPercap_1967 \\\ncount 2.000000 2.000000 2.000000 2.000000\nmean 10298.085650 11598.522455 12696.452430 14495.021790\nstd 365.560078 917.644806 677.727301 43.986086\nmin 10039.595640 10949.649590 12217.226860 14463.918930\n25% 10168.840645 11274.086022 12456.839645 14479.470360\n50% 10298.085650 11598.522455 12696.452430 14495.021790\n75% 10427.330655 11922.958888 12936.065215 14510.573220\nmax 10556.575660 12247.395320 13175.678000 14526.124650\n\n gdpPercap_1972 gdpPercap_1977 gdpPercap_1982 gdpPercap_1987 \\\ncount 2.00000 2.000000 2.000000 2.000000\nmean 16417.33338 17283.957605 18554.709840 20448.040160\nstd 525.09198 1485.263517 1304.328377 2037.668013\nmin 16046.03728 16233.717700 17632.410400 19007.191290\n25% 16231.68533 16758.837652 18093.560120 19727.615725\n50% 16417.33338 17283.957605 18554.709840 20448.040160\n75% 16602.98143 17809.077557 19015.859560 21168.464595\nmax 16788.62948 18334.197510 19477.009280 21888.889030\n\n gdpPercap_1992 gdpPercap_1997 gdpPercap_2002 gdpPercap_2007\ncount 2.000000 2.000000 2.000000 2.000000\nmean 20894.045885 24024.175170 26938.778040 29810.188275\nstd 3578.979883 4205.533703 5301.853680 6540.991104\nmin 18363.324940 21050.413770 23189.801350 25185.009110\n25% 19628.685413 22537.294470 25064.289695 27497.598692\n50% 20894.045885 24024.175170 26938.778040 29810.188275\n75% 22159.406358 25511.055870 28813.266385 32122.777857\nmax 23424.766830 26997.936570 30687.754730 34435.367440\n~~~\n{: .output}\n\n* Not particularly useful with just two records,\n but very helpful when there are thousands.\n\n> ## Reading Other Data\n>\n> Read the data in `gapminder_gdp_americas.csv`\n> (which should be in the same directory as `gapminder_gdp_oceania.csv`)\n> into a variable called `americas`\n> and display its summary statistics.\n>\n> > ## Solution\n> > To read in a CSV, we use `pd.read_csv` and pass the filename `'data/gapminder_gdp_americas.csv'` to it.\n> > We also once again pass the column name `'country'` to the parameter `index_col` in order to index by country.\n> > The summary statistics can be displayed with the `DataFrame.describe()` method.\n> > ~~~\n> > americas = pd.read_csv('data/gapminder_gdp_americas.csv', index_col='country')\n> > americas.describe()\n> > ~~~\n> >{: .language-python}\n> {: .solution}\n{: .challenge}\n\n> ## Inspecting Data\n>\n> After reading the data for the Americas,\n> use `help(americas.head)` and `help(americas.tail)`\n> to find out what `DataFrame.head` and `DataFrame.tail` do.\n>\n> 1. What method call will display the first three rows of this data?\n> 2. What method call will display the last three columns of this data?\n> (Hint: you may need to change your view of the data.)\n>\n> > ## Solution\n> > 1. We can check out the first five rows of `americas` by executing `americas.head()`\n> > (allowing us to view the head of the DataFrame). We can specify the number of rows we wish\n> > to see by specifying the parameter `n` in our call\n> > to `americas.head()`. 
To view the first three rows, execute:\n> >\n> > ~~~\n> > americas.head(n=3)\n> > ~~~\n> > {: .language-python}\n> > ~~~\n> > continent gdpPercap_1952 gdpPercap_1957 gdpPercap_1962 \\\n> > country\n> > Argentina Americas 5911.315053 6856.856212 7133.166023\n> > Bolivia Americas 2677.326347 2127.686326 2180.972546\n> > Brazil Americas 2108.944355 2487.365989 3336.585802\n> >\n> > gdpPercap_1967 gdpPercap_1972 gdpPercap_1977 gdpPercap_1982 \\\n> > country\n> > Argentina 8052.953021 9443.038526 10079.026740 8997.897412\n> > Bolivia 2586.886053 2980.331339 3548.097832 3156.510452\n> > Brazil 3429.864357 4985.711467 6660.118654 7030.835878\n> >\n> > gdpPercap_1987 gdpPercap_1992 gdpPercap_1997 gdpPercap_2002 \\\n> > country\n> > Argentina 9139.671389 9308.418710 10967.281950 8797.640716\n> > Bolivia 2753.691490 2961.699694 3326.143191 3413.262690\n> > Brazil 7807.095818 6950.283021 7957.980824 8131.212843\n> >\n> > gdpPercap_2007\n> > country\n> > Argentina 12779.379640\n> > Bolivia 3822.137084\n> > Brazil 9065.800825\n> > ~~~\n> > {: .output}\n> > 2. To check out the last three rows of `americas`, we would use the command,\n> > `americas.tail(n=3)`, analogous to `head()` used above. However, here we want to look at\n> > the last three columns so we need to change our view and then use `tail()`. To do so, we\n> > create a new DataFrame in which rows and columns are switched:\n> >\n> > ~~~\n> > americas_flipped = americas.T\n> > ~~~\n> > {: .language-python}\n> >\n> > We can then view the last three columns of `americas` by viewing the last three rows\n> > of `americas_flipped`:\n> > ~~~\n> > americas_flipped.tail(n=3)\n> > ~~~\n> > {: .language-python}\n> > ~~~\n> > country Argentina Bolivia Brazil Canada Chile Colombia \\\n> > gdpPercap_1997 10967.3 3326.14 7957.98 28954.9 10118.1 6117.36\n> > gdpPercap_2002 8797.64 3413.26 8131.21 33329 10778.8 5755.26\n> > gdpPercap_2007 12779.4 3822.14 9065.8 36319.2 13171.6 7006.58\n> >\n> > country Costa Rica Cuba Dominican Republic Ecuador ... 
\\\n> > gdpPercap_1997 6677.05 5431.99 3614.1 7429.46 ...\n> > gdpPercap_2002 7723.45 6340.65 4563.81 5773.04 ...\n> > gdpPercap_2007 9645.06 8948.1 6025.37 6873.26 ...\n> >\n> > country Mexico Nicaragua Panama Paraguay Peru Puerto Rico \\\n> > gdpPercap_1997 9767.3 2253.02 7113.69 4247.4 5838.35 16999.4\n> > gdpPercap_2002 10742.4 2474.55 7356.03 3783.67 5909.02 18855.6\n> > gdpPercap_2007 11977.6 2749.32 9809.19 4172.84 7408.91 19328.7\n> >\n> > country Trinidad and Tobago United States Uruguay Venezuela\n> > gdpPercap_1997 8792.57 35767.4 9230.24 10165.5\n> > gdpPercap_2002 11460.6 39097.1 7727 8605.05\n> > gdpPercap_2007 18008.5 42951.7 10611.5 11415.8\n> > ~~~\n> > {: .output}\n> > \n> > This shows the data that we want, but we may prefer to display three columns instead of three rows,\n> > so we can flip it back:\n> > ~~~\n> > americas_flipped.tail(n=3).T \n> > ~~~\n> > {: .language-python} \n> > __Note:__ we could have done the above in a single line of code by 'chaining' the commands:\n> > ~~~\n> > americas.T.tail(n=3).T\n> > ~~~\n> > {: .language-python}\n> {: .solution}\n{: .challenge}\n\n\n> ## Reading Files in Other Directories\n>\n> The data for your current project is stored in a file called `microbes.csv`,\n> which is located in a folder called `field_data`.\n> You are doing analysis in a notebook called `analysis.ipynb`\n> in a sibling folder called `thesis`:\n>\n> ~~~\n> your_home_directory\n> +-- field_data/\n> | +-- microbes.csv\n> +-- thesis/\n> +-- analysis.ipynb\n> ~~~\n> {: .output}\n>\n> What value(s) should you pass to `read_csv` to read `microbes.csv` in `analysis.ipynb`?\n> \n> > ## Solution\n> > We need to specify the path to the file of interest in the call to `pd.read_csv`. We first need to 'jump' out of\n> > the folder `thesis` using '../' and then into the folder `field_data` using 'field_data/'. Then we can specify the filename `microbes.csv.\n> > The result is as follows:\n> > ~~~\n> > data_microbes = pd.read_csv('../field_data/microbes.csv')\n> > ~~~\n> >{: .language-python}\n> {: .solution}\n{: .challenge}\n\n> ## Writing Data\n> \n> As well as the `read_csv` function for reading data from a file,\n> Pandas provides a `to_csv` function to write dataframes to files.\n> Applying what you've learned about reading from files,\n> write one of your dataframes to a file called `processed.csv`.\n> You can use `help` to get information on how to use `to_csv`.\n> > ## Solution\n> > In order to write the DataFrame `americas` to a file called `processed.csv`, execute the following command:\n> > ~~~\n> > americas.to_csv('processed.csv')\n> > ~~~\n> >{: .language-python}\n> > For help on `to_csv`, you could execute, for example:\n> > ~~~\n> > help(americas.to_csv)\n> > ~~~\n> >{: .language-python}\n> > Note that `help(to_csv)` throws an error! This is a subtlety and is due to the fact that `to_csv` is NOT a function in \n> > and of itself and the actual call is `americas.to_csv`. \n> {: .solution}\n{: .challenge}",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
cb586a31feb368b0af12352b6972560b3050cba6 | 41,702 | ipynb | Jupyter Notebook | p2/04b_Datenvorverarbeitung.ipynb | fh-swf-hgi/ml | 50a79bbd443f8f3d18b63263c7741c057b734c65 | [
"MIT"
] | null | null | null | p2/04b_Datenvorverarbeitung.ipynb | fh-swf-hgi/ml | 50a79bbd443f8f3d18b63263c7741c057b734c65 | [
"MIT"
] | 1 | 2021-04-07T13:29:13.000Z | 2021-04-07T13:29:13.000Z | p2/04b_Datenvorverarbeitung.ipynb | fh-swf-hgi/ml | 50a79bbd443f8f3d18b63263c7741c057b734c65 | [
"MIT"
] | null | null | null | 32.83622 | 499 | 0.628171 | [
[
[
"<figure>\n <IMG SRC=\"https://upload.wikimedia.org/wikipedia/commons/thumb/d/d5/Fachhochschule_Südwestfalen_20xx_logo.svg/320px-Fachhochschule_Südwestfalen_20xx_logo.svg.png\" WIDTH=250 ALIGN=\"right\">\n</figure>\n\n# Machine Learning\n### Sommersemester 2021\nProf. Dr. Heiner Giefers",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nfrom sklearn.preprocessing import *\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.decomposition import PCA\nfrom sklearn.datasets import load_iris\nimport matplotlib.pyplot as plt\nimport wget\nfrom pathlib import Path",
"_____no_output_____"
]
],
[
[
"Fehlende Pakete bitte so nachinstllaieren:\n```python\nimport sys\n!{sys.executable} -m pip install <paketname>\n```",
"_____no_output_____"
],
[
"# Daten Analysieren",
"_____no_output_____"
],
[
"Daten sind die Grundlage von Machine Learning (ML) Algorithmen.\nDebei zählt nicht nur die Masse an Daten, sondern auch, oder vor allem deren Qualität.\nDie Ergebnisse der ML-Algorithmen können nur so gut sein, wie die Qualität der Daten es zulässt.\n\nDaher ist das Verständnis von Daten ein wesentlicher Schritt für jedes ML Projekt.\nLassen Sie uns zunächst einige grundlegende Begriffe von Daten durchgehen:\n\nLiegen die Daten in *strukturierter* Form vor so kann man sie in der Regel als Tabelle oder Matrix beschrieben.\nDie einzelnen Zeilen dieser Tabellen nennt man **Instanzen** oder **Datenpunkte**, bei den Spalten spricht man von **Attributen**, **Merkmalen** oder **Variablen**.\n\n- Ein **Datenpunkt** ist ein Datenpaket, das ein Objekt beschreibt (einen Fall, eine Person, ein Zeitpunkt, ...).\n- Ein **Attribut** ist eine messbare Eigenschaft, anhand derer Objekte beschrieben werden (Größe, Alter, Gewicht, Augenfarbe, ...).\n\nAttribute können **kategorisch** oder **numerisch** sein:\n- **Kategorisch** sind Attribute, deren Wertebereich eine endliche Menge ist (Farbe, Typ, Wochentag, ...)\n - **Ordinal** sind katagorische Attribute mit Ordnung (sehr schlecht, schlecht, zufriedenstellend, gut, sehr gut)\n - **Nominal** sind katagorische Attribute ohne Reihenfolge (grün, blau, gelb)\n- **Numerisch** sind Attribute, die durch Zahlen dargestellt werden (Größe, Gewicht, Temperatur, ...) und innerhalb eines endlichen oder unendlichen Intervalls einen beliebigen Wert annehmen können.\n",
"_____no_output_____"
],
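[
"To make the distinction concrete, here is a minimal sketch of how `pandas` can separate numerical from categorical attributes. The small DataFrame and its column names are made-up placeholders, not part of the FIFA data loaded below:\n```python\nimport pandas as pd\n\n# Hypothetical example frame with one numerical and two categorical columns\ndf = pd.DataFrame({\n    'age': [25, 31, 22],                           # numerical\n    'preferred_foot': ['Left', 'Right', 'Right'],  # nominal (no order)\n    'skill_level': ['good', 'very good', 'good']   # ordinal (ordered)\n})\n\nnumeric_cols = df.select_dtypes(include='number').columns      # -> ['age']\ncategorical_cols = df.select_dtypes(exclude='number').columns  # -> ['preferred_foot', 'skill_level']\nprint(list(numeric_cols), list(categorical_cols))\n```",
"_____no_output_____"
],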
[
"In the following, we look at a dataset of player data from the FIFA 20 video game:",
"_____no_output_____"
]
],
[
[
"file = Path(\"./players_20.csv\")\nif not file.is_file():\n wget.download(\"https://raw.githubusercontent.com/fh-swf-hgi/ml/main/p2/players_20.csv\")\ndata = pd.read_csv(\"players_20.csv\", encoding = \"ISO-8859-1\")\ndata.head()",
"_____no_output_____"
],
[
"data.describe()",
"_____no_output_____"
]
],
[
[
"`age`, `height_cm`, and `weight_kg` are examples of numerical attributes; `nationality`, `club`, and `preferred_foot` are categorical attributes.",
"_____no_output_____"
],
[
"## Visualization\nTo get a better feel for the attributes, it is advisable to visualize the data.\nA comprehensive and widely used Python library for this is `matplotlib`.\n\n### 1. Bar charts\nBar charts are a simple way to display how frequently particular values occur in **categorical features**.\n\nThe following chart shows how often particular clubs appear in the dataset:",
"_____no_output_____"
]
],
[
[
"data['club'].value_counts().plot.bar()",
"_____no_output_____"
]
],
[
[
"In one variant, the bars are drawn horizontally; such *horizontal bar charts* have the advantage that the values being compared are easier to read.\n\n**Exercise:** Use the `barh` method to display another categorical feature as a horizontal bar chart.",
"_____no_output_____"
]
],
[
[
"# YOUR CODE HERE\nraise NotImplementedError()",
"_____no_output_____"
]
],
[
[
"### 2. Histogram\n\nA histogram graphically shows the frequency distribution of the values of a **numerical feature**.\n\nThe value range of the attribute is divided into intervals (usually of equal size).\nThe number of values falling into each interval then determines the height of the corresponding bar.",
"_____no_output_____"
]
],
[
[
"data['height_cm'].plot.hist()",
"_____no_output_____"
]
],
[
[
"The histogram above shows a distribution that roughly has the shape of a bell curve, so we may assume that player height is a normally distributed variable.\n\n**Exercise:** Plot the attributes `potential` and `overall` in a histogram.",
"_____no_output_____"
]
],
[
[
"# YOUR CODE HERE\nraise NotImplementedError()",
"_____no_output_____"
]
],
[
[
"### 3. Scatter plot\n\nA scatter plot shows the relationship between value pairs of several attributes.\nThe simplest case is a two-dimensional plot of the dependency between two attributes.\nEach point corresponds to one data point, with the *x* and *y* coordinates given by the values of the two attributes for that data point.\n\nIn the following plot we compare the height and weight of the players.\nA *pattern* is already visible: the taller a player is, the heavier he **generally** is as well.\nImportantly, this is not a *law* or a *rule* that always holds. There are players who are taller and at the same time lighter than other players; such cases are simply the exception, i.e. they are less likely.",
"_____no_output_____"
]
],
[
[
"data.plot.scatter('height_cm', 'weight_kg')",
"_____no_output_____"
]
],
[
[
"If more than two attributes are to be compared, the visualization becomes more complicated.\nOne option is to move from two-dimensional to three-dimensional space.\nAnother is to vary the appearance of the individual points (e.g. their color) according to additional attributes.\n\n**Exercise:** Use the `c` parameter of the `matplotlib` function `scatter` to display a further attribute (e.g. `overall`) in addition to `height_cm` and `weight_kg`.\n",
"_____no_output_____"
]
],
[
[
"# YOUR CODE HERE\nraise NotImplementedError()",
"_____no_output_____"
]
],
[
[
"### 4. Box plot\n\nBox plots are a compact yet informative way to display the characteristic properties of a numerical attribute.\n\nThe box corresponds to the range containing the middle 50% of all values; the line inside the box marks the median.\nThe *whiskers* indicate the range in which the vast majority of the values lie.\nPoints outside this range should be treated as *outliers*.\n\n",
"_____no_output_____"
]
],
[
[
"data[['potential','overall']].plot.box()",
"_____no_output_____"
]
],
[
[
"## Outlier detection\n\nAn *outlier* is a value that lies far away from all (or almost all) other values in the same dataset.\n\nOne way to detect such outliers is to visualize the data, e.g. with a box plot.\nIn the following example we look at the feature `value_eur`, i.e. the market value of the players.\nAs the label on the y-axis shows, the values are displayed in units of $1e7$, i.e. tens of millions.",
"_____no_output_____"
]
],
[
[
"data['value_eur'].plot.box()",
"_____no_output_____"
]
],
[
[
"Data points that are outliers in several categories can also be identified with scatter plots.\nAs the following example shows, one player has a very high value in both the *player value* (`value_eur`) and *wage* (`wage_eur`) categories.",
"_____no_output_____"
]
],
[
[
"data.plot.scatter('value_eur', 'wage_eur')\nplt.scatter(data['value_eur'][0],data['wage_eur'][0], s=150, edgecolors='k', c='None')",
"_____no_output_____"
]
],
[
[
"Outliers can also be computed.\nIn the following example we determine, for all data points and features, which values lie more than three standard deviations above the mean.",
"_____no_output_____"
]
],
[
[
"s = 3*data.std() + data.mean()\n(data.gt(s, axis=1)).head()",
"_____no_output_____"
]
],
[
[
"Once we have identified the outliers, we can remove them from the dataset.",
"_____no_output_____"
]
],
[
[
"data_clean = data[(data.gt(s, axis=1)).any(axis=1)==False].copy()",
"_____no_output_____"
],
[
"print(f\"The original dataset has {data.shape[0]} rows\")\nprint(f\"The cleaned dataset still has {data_clean.shape[0]} rows\")",
"_____no_output_____"
]
],
[
[
"# Data Preprocessing",
"_____no_output_____"
],
[
"Before machine learning methods can be applied, the datasets usually have to be preprocessed carefully.\nDepending on the method used, specific preprocessing steps may be necessary.\nThe steps described below are required for almost all applications:",
"_____no_output_____"
],
[
"## Data selection\nDatasets are often large in the sense that data has been collected for many different features.\nHaving a lot of data is beneficial in principle, but an analysis should ideally use only the *relevant* data.\n\nIn some cases you can identify the less relevant features directly.\nFor our dataset, let us assume that the player name and the positions are rather unimportant. We therefore remove these features, i.e. the corresponding columns, from our dataset:",
"_____no_output_____"
]
],
[
[
"data_clean.drop(['short_name','player_positions'], axis=1, inplace=True)",
"_____no_output_____"
]
],
[
[
"Which features are relevant and which are not is, however, not always easy to answer.\nIt is therefore advisable to use mathematical methods that can identify less relevant or even redundant features.\nMore on that later; a small sketch of one simple automatic criterion follows below.",
"_____no_output_____"
],
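[
"As a small, hedged illustration of such an automatic criterion (a sketch only; the lesson itself turns to PCA later), one simple possibility is to drop features whose variance is close to zero, e.g. with scikit-learn's `VarianceThreshold`. The threshold value below is an assumption and would need tuning for real data:\n```python\nfrom sklearn.feature_selection import VarianceThreshold\n\n# Consider only the numerical columns of the cleaned dataset\nnum = data_clean.select_dtypes('number')\n\n# threshold=0.0 keeps every column that is not constant\nselector = VarianceThreshold(threshold=0.0)\nselector.fit(num)\n\nkept = num.columns[selector.get_support()]\nprint('kept features:', list(kept))\n```",
"_____no_output_____"
],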
[
"## Normalization\nThe value ranges of the different features can differ considerably.\nThe age of the players will hardly exceed $50$, whereas the wages only start well above $1000$.\nIf our machine learning model relies on all features having the same *leverage*, it makes sense to *normalize* the values of all features.\n\n\n### 1. Standardization\nOne of the most frequently used normalization methods is *standardization*.\nHere, the mean $\bar X$ is first subtracted from the values of the data points of attribute $X$, so that the transformed data always has mean zero. The values are then divided by the standard deviation $\sigma_X$, so that the transformed feature $\hat X$ has variance 1:\n$$\hat x\mapsto \frac{x - \bar X}{\sigma_X}$$\n\nYou can carry out the standardization by hand in Python using the formula above, or use existing functions, e.g. the `scale` function from the `preprocessing` module of the extensive Python library `scikit-learn` (short: `sklearn`).",
"_____no_output_____"
]
],
[
[
"# Select the columns with numerical data types (int64 in our case)\nncul = data_clean.select_dtypes('int64').columns\n\n# Apply the standardization formula (restricted to the numerical columns)\ndata_strd = (data_clean[ncul] - data_clean[ncul].mean())/data_clean[ncul].std()",
"_____no_output_____"
],
[
"# Standardization with sklearn\ndata_skstrd = scale(data_clean[ncul])",
"_____no_output_____"
]
],
[
[
"Comparing the dispersion statistics of the new datasets, we see that the means are very close to 0 and the standard deviations very close to 1.\nMoreover, the values computed by hand are very close to those computed with `sklearn`.\nThe small differences are caused by differences in the numerical computations.",
"_____no_output_____"
]
],
[
[
"data_strd.describe().loc[['mean', 'std']]",
"_____no_output_____"
],
[
"pd.DataFrame(data_skstrd, columns=data_strd.columns).describe().loc[['mean', 'std']]",
"_____no_output_____"
]
],
[
[
"### 2. Min-max normalization\nAnother method for normalizing the data series is *min-max scaling*.\nThe idea behind it is very simple: first, the minimum of the attribute values $\min_X$ is subtracted from all data points $x$ of attribute $X$. After this step, the value ranges of all attributes start at 0.\nThen all values are divided by the size of the value range, $\max_X-\min_X$. Afterwards, all values are scaled to the range $[0,1]$:\n$$\hat x\mapsto \frac{x-\min_X}{\max_X-\min_X}$$\n\nIn `sklearn`, the class for min-max normalization is called `MinMaxScaler`.",
"_____no_output_____"
]
],
[
[
"# Min-max scaling by hand\ndata_scaled = (data_clean[ncul] - data_clean[ncul].min())/(data_clean[ncul].max()-data_clean[ncul].min())",
"_____no_output_____"
],
[
"# Min-max scaling with sklearn\ndata_skscaled = MinMaxScaler().fit_transform(data_clean[ncul])",
"_____no_output_____"
]
],
[
[
"We can now again compare both variants:",
"_____no_output_____"
]
],
[
[
"data_scaled.max() - data_scaled.min()",
"_____no_output_____"
],
[
"pd.DataFrame(data_skscaled, columns=data_scaled.columns).max() - pd.DataFrame(data_skscaled, columns=data_scaled.columns).min()",
"_____no_output_____"
]
],
[
[
"**Exercise:** Following the principle of min-max scaling, normalize the values of our dataset to the range $[-1,1]$ (instead of the range $[0,1]$ used above).",
"_____no_output_____"
]
],
[
[
"data_ex = None\n# YOUR CODE HERE\nraise NotImplementedError()\ndata_ex",
"_____no_output_____"
],
[
"# Test Cell\n#----------\nassert (data_ex.max() == 1).all()\nassert (data_ex.min() == -1).all()",
"_____no_output_____"
]
],
[
[
"In `sklearn`, this can be achieved via the `feature_range` argument of the `MinMaxScaler` object:",
"_____no_output_____"
]
],
[
[
"data_skex = MinMaxScaler(feature_range=(-1,1)).fit_transform(data_clean[ncul])\ndata_skex.min(axis=0), data_skex.max(axis=0)",
"_____no_output_____"
]
],
[
[
"Now we can merge the original and the normalized dataset:",
"_____no_output_____"
]
],
[
[
"# updating our dataframe\ndata_clean[ncul] = data_scaled\ndata_clean.head()",
"_____no_output_____"
]
],
[
[
"## Encoding\nSo far we have only considered the numerical features, not the categorical ones.\nThe vast majority of ML methods are based on *computing* with the attribute values.\nIt is therefore usually necessary to convert categorical features into numerical ones, i.e. to *encode* them.\n\n### 1. Integer encoding\nOne way to convert categorical data into numerical data is to assign a unique (integer) value to each *category*.\nThis simple method is perfectly sensible, **but only if the categorical variables are ordinal**.\nA good example are the school grades *very good*, *good*, *satisfactory*, and so on, to which the values $1$, $2$, $3$, etc. can be assigned in a natural way, and these values can then be used in meaningful computations.\nIf the variables are **nominal**, i.e. without any recognizable order, integer encoding can lead to worse or entirely **unexpected results**.\nPut simply, this is because the methods infer dependencies from the numerical values that do not exist in reality.\n \n\nIn `sklearn`, integer encoding is provided by the `OrdinalEncoder` class.",
"_____no_output_____"
]
],
[
[
"# Columns with categorical attributes\nccul = ['club','nationality','preferred_foot']\n\ndata_en = data_clean.copy()\ndata_en[ccul] = OrdinalEncoder().fit_transform(data_clean[ccul])\ndata_en.head()",
"_____no_output_____"
]
],
[
[
"It is also possible to encode categorical features directly with `pandas`.\nTo do so, set the column type to `category` and use `cat.codes` as the *encoding*.\n",
"_____no_output_____"
]
],
[
[
"data_clean['club'].astype('category').cat.codes",
"_____no_output_____"
]
],
[
[
"### 2. One-hot encoding\n\nWe have already mentioned that integer encoding is not suitable for nominal features.\nA very common transformation that can also be used for nominal attributes is *one-hot encoding*.\nHere, a feature with $n$ different categories is converted into an $n$-dimensional vector.\nEach position of this vector stands for one particular category.\nIf a data point has a $1$ at position $i$ of this vector, the data point belongs to the $i$-th category of this feature.\n\nAs is easy to see, only a single $1$ can appear in this vector, because a data point can belong to at most one category. All other positions of the vector are $0$. Only one entry is hot, hence the name *one-hot*.\n\nIn `sklearn`, one-hot encoding is available via the `OneHotEncoder` class.",
"_____no_output_____"
]
],
[
[
"onehot = OneHotEncoder(sparse=False).fit_transform(data_clean[ccul])\nonehot[:5]",
"_____no_output_____"
]
],
[
[
"Again, we can achieve the same with `pandas`, namely with the function `pandas.get_dummies`.\nEach categorical feature is expanded into $n$ individual features, where $n$ is the number of values ($=$ categories) of the feature.",
"_____no_output_____"
]
],
[
[
"data_oh = pd.get_dummies(data_clean)\ndata_oh.head()",
"_____no_output_____"
]
],
[
[
"## Splitting into training and test data\n\nA typical way to evaluate a machine learning model is to test the model on *new* data.\n*New* here means data that was not used to train the model.\nIf we know the true labels of the test data, we can estimate quite accurately how well the trained model works.\n\nBut why do we need new data here? Would it not be better to use this data for training as well, to develop an even better model?\nQuite the opposite: in production, your model will always have to work with unknown data, so what matters above all is how well the model *generalizes*.\nIf you use all of your data for training, the model may deliver very good results, but only for **this data**; it is then *overfit*.\n\nTo prevent *overfitting*, you should therefore always reserve part of the dataset for testing.\nThis part is usually smaller than the training set; how much smaller depends on the size of the dataset.\n\nFor large datasets (e.g. with more than 1 million data points) a small test set (2%) is appropriate.\nFor smaller datasets, a test set of $1/3$ to $1/5$ of the total data is common.\n\n`sklearn` provides the function `train_test_split` for splitting data automatically.",
"_____no_output_____"
]
],
[
[
"train_set, test_set = train_test_split(data_oh, test_size=0.3)\ntrain_set.shape, test_set.shape",
"_____no_output_____"
]
],
[
[
"# Optimizations\n\nFurther techniques can be used to improve the quality of the training data.\nSome of these techniques are briefly introduced below.\n\n## Dimensionality reduction\n\nWhen collecting data, one is usually not picky: whatever can be recorded is collected, without paying much attention to which data will actually be *needed* later.\nThis approach is fine in principle, as it leaves the greatest room for later analyses.\n\nHowever, when a particular question is to be addressed with a dataset, usually not all attributes of the dataset are relevant.\nKeeping them during training mostly means higher time and resource costs, which frequently also show up in a more poorly trained model.\nIt is therefore advisable to reduce the number of attributes, i.e. the *dimensionality*, of the dataset.\n\nThe best-known algorithm for dimensionality reduction is **Principal Component Analysis (PCA)**. PCA is a method from statistics that extracts, from data with many features, a few factors that are most decisive for, or most informative about, these features. PCA can be used not only to reduce the number of inputs but also to visualize high-dimensional data in a 2D plot.\n\nWe will not look into exactly how PCA works here, but we do want to apply the PCA method to our data and display the results.\n`sklearn` provides the `PCA` class, which can reduce the attributes to a given number `n_components`. We will use it to represent our dataset in only 2 dimensions:\n\n",
"_____no_output_____"
]
],
[
[
"pca = PCA(n_components=2).fit(data_clean[ncul])\ndata_pca = pca.transform(data_clean[ncul])\ndata_pca.shape",
"_____no_output_____"
]
],
[
[
"With the code above we reduced the dimensionality of our dataset from 7 to 2 without losing a large amount of *information* (or, mathematically speaking, *variance*).",
"_____no_output_____"
]
],
[
[
"var = pca.explained_variance_ratio_*100\nprint('The first component represents %.2f%% of the original variance' %var[0])\nprint('The second component represents %.2f%% of the original variance' %var[1])\nprint('So %.2f%% of the original variance (= information) has been preserved.' %var.sum())",
"_____no_output_____"
]
],
[
[
"With only 2 features remaining, the data can be plotted in 2D space:",
"_____no_output_____"
]
],
[
[
"plt.scatter(data_pca[:,0], data_pca[:,1])",
"_____no_output_____"
]
],
[
[
"## Whitening\n\nSo-called *whitening* is a linear transformation that can be carried out by means of a principal component analysis.\nIn the process, the variance of the principal components is normalized to $1$.",
"_____no_output_____"
],
[
"Consider the features `height_cm` and `weight_kg` from our dataset. We see that we have already *normalized* the values of our data points: in both categories the value range lies between $0.0$ and $1.0$.",
"_____no_output_____"
]
],
[
[
"plt.scatter(data_clean['height_cm'], data_clean['weight_kg'])\nplt.axis('equal')\nplt.show()",
"_____no_output_____"
]
],
[
[
"If we now apply PCA, we observe very different variances for the principal components.",
"_____no_output_____"
]
],
[
[
"pca_wh = PCA(whiten=False).fit_transform(data_clean[['height_cm', 'weight_kg']])\n\nprint(\"Variance of the principal components:\", pca_wh.std(axis=0)**2)\nplt.scatter(pca_wh[:,0], pca_wh[:,1])\nplt.xlabel(\"PC1\")\nplt.ylabel(\"PC2\")\nplt.axis('equal')\nplt.show()",
"_____no_output_____"
]
],
[
[
"If the PCA is run with whitening, the variance of the principal components is normalized to 1.\nThis can be helpful for subsequent processing steps of the data.",
"_____no_output_____"
]
],
[
[
"pca_wh = PCA(whiten=True).fit_transform(data_clean[['height_cm', 'weight_kg']])\n\nprint(\"Variance of the principal components:\", pca_wh.std(axis=0)**2)\nplt.scatter(pca_wh[:,0], pca_wh[:,1])\nplt.xlabel(\"PC1\")\nplt.ylabel(\"PC2\")\nplt.axis('equal')\nplt.show()",
"_____no_output_____"
]
],
[
[
"-----\n## Exercise\nTry out the preprocessing steps presented above on a simple dataset.\nAs an example we use the well-known `iris` dataset.\nIt contains 150 observations of 4 attributes for a total of three iris species (*Iris setosa*, *Iris virginica*, and *Iris versicolor*).\n\nThe attributes are the width and the length of the sepal and of the petal.\nBased on these 4 values, the iris species can be *classified* quite well.\nThe dataset is used in many data science examples and is something of a *Hello World* of machine learning.\n\nWe now load the dataset via the *scikit-learn* function `sklearn.datasets.load_iris`.\nThen carry out the following steps on your own:\n1. Normalize the dataset `X` with min-max normalization\n2. Encode the target variable `y` with one-hot encoding\n3. Split the dataset into training and test data\n4. Reduce the dataset `X` to 2 attributes with PCA and whitening\n5. Plot the preprocessed dataset `X`\n\n",
"_____no_output_____"
]
],
[
[
"iris_data = load_iris()\nX = iris_data.data\ny = iris_data.target.reshape(-1,1)",
"_____no_output_____"
],
[
"OneHotEncoder(sparse=False).fit_transform(y)",
"_____no_output_____"
]
],
[
[
"### Step 1: Normalization\nNormalize `X` with **min-max normalization**:",
"_____no_output_____"
]
],
[
[
"X_norm = None\n# YOUR CODE HERE\nraise NotImplementedError()",
"_____no_output_____"
],
[
"# Test Cell\n#----------\nassert type(X_norm) == np.ndarray, 'X_norm should be a numpy array containing transformed output'\nassert X_norm.ptp() == 1.\nassert X_norm.shape == X.shape\nassert (X_norm>=0).all(), 'All values must be positive'",
"_____no_output_____"
]
],
[
[
"### Step 2: Encoding\nEncode `y` with **one-hot encoding**:",
"_____no_output_____"
]
],
[
[
"y_en = None\n# YOUR CODE HERE\nraise NotImplementedError()",
"_____no_output_____"
],
[
"# Test Cell\n#----------\nassert type(y_en) == np.ndarray, 'y_en should be a numpy array containing transformed output'\nassert y_en.shape == (150, 3), 'There should be 3 columns for the 3 classes'\nassert y_en.sum() == 150.\nassert y_en.ptp() == 1.",
"_____no_output_____"
]
],
[
[
"### Step 3: Splitting\nSplit `X_norm` and `y_en` into a test set (`X_test`, `y_test`) and a training set (`X_train`, `y_train`). The training set should contain 70% of the data points:",
"_____no_output_____"
]
],
[
[
"X_train, X_test, y_train, y_test = [None]*4\n# YOUR CODE HERE\nraise NotImplementedError()",
"_____no_output_____"
],
[
"# Test Cell\n#----------\nassert X_train.all() in X_norm, 'X_train data is not in X_norm'\nassert X_train.shape[0] == X_norm.shape[0]*0.7, 'The size of training set is not matching 70%'\nassert X_train.shape[0]+X_test.shape[0] == X_norm.shape[0]\nassert y_train.all() in y_en",
"_____no_output_____"
]
],
[
[
"### Step 4: Dimensionality reduction\nUse principal component analysis to reduce the datasets `X_train` and `X_test` to 2 attributes each. Use whitening.",
"_____no_output_____"
]
],
[
[
"X_train2d = None\nX_test2d = None\n# YOUR CODE HERE\nraise NotImplementedError()",
"_____no_output_____"
],
[
"# Test Cell\n#----------\nassert type(X_train2d) == np.ndarray, 'X_train2d should be a numpy array containing transformed output, not the model'\nassert X_train2d.shape == (105, 2), 'The number of attributes is not 2'\nassert X_test2d.shape == (45, 2), 'The number of attributes is not 2'\nassert np.allclose(X_train2d.std(axis=0).ptp(), 0), 'Attributes have different variances'",
"_____no_output_____"
]
],
[
[
"### Step 5: Visualization\nPlot the preprocessed training set `X_train2d` with the function `plt.scatter`.",
"_____no_output_____"
]
],
[
[
"# YOUR CODE HERE\nraise NotImplementedError()",
"_____no_output_____"
]
],
[
[
"## Quellen:\n[1] M. Berthold, C. Borgelt, F. Höppner and F. Klawonn, Guide to Intelligent Data Analysis, London: Springer-Verlag, 2010. \n[2] J. VanderPlas, Python Data Science Handbook, O'Reilly Media, Inc., 2016. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb5872ddf22bda2633a15855207b2cc7dda75934 | 2,238 | ipynb | Jupyter Notebook | example/model.ipynb | Hourout/tensorview | 6a4f1f62aebf15efee08166922eb86196bfbf71e | [
"Apache-2.0"
] | 13 | 2019-06-28T05:56:31.000Z | 2020-08-20T01:33:30.000Z | example/model.ipynb | Hourout/tensorview | 6a4f1f62aebf15efee08166922eb86196bfbf71e | [
"Apache-2.0"
] | null | null | null | example/model.ipynb | Hourout/tensorview | 6a4f1f62aebf15efee08166922eb86196bfbf71e | [
"Apache-2.0"
] | 2 | 2020-05-29T03:47:24.000Z | 2020-06-17T10:03:08.000Z | 22.158416 | 138 | 0.576408 | [
[
[
"import tensorflow as tf\nimport tensorview as tv\nimport linora as la",
"_____no_output_____"
],
[
"model = tf.keras.applications.DenseNet121()\ntv.model.statistics(model)",
"_____no_output_____"
],
[
"image = tf.expand_dims(tf.image.resize(la.image.ImageAug().read_image('pandas.jpg').run(), [224, 224]), 0)",
"_____no_output_____"
],
[
"tv.model.visualize_weights(model, layer_name=['conv3_block6_1_conv', 'conv3_block7_0_bn'])",
"_____no_output_____"
],
[
"tv.model.visualize_layer(model, image, layer_name=['input_1', 'zero_padding2d', 'conv1/conv', 'conv2_block4_1_conv'], jupyter=False)",
"_____no_output_____"
],
[
"tv.model.visualize_heatmaps(model, image, layer_name=['input_1', 'zero_padding2d', 'conv1/conv', 'conv2_block4_1_conv'])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb58af2f454625b2a11d5269407dc91885bc2b53 | 693 | ipynb | Jupyter Notebook | cs224w/code2.ipynb | kidrabit/Data-Visualization-Lab-RND | baa19ee4e9f3422a052794e50791495632290b36 | [
"Apache-2.0"
] | 1 | 2022-01-18T01:53:34.000Z | 2022-01-18T01:53:34.000Z | cs224w/code2.ipynb | kidrabit/Data-Visualization-Lab-RND | baa19ee4e9f3422a052794e50791495632290b36 | [
"Apache-2.0"
] | null | null | null | cs224w/code2.ipynb | kidrabit/Data-Visualization-Lab-RND | baa19ee4e9f3422a052794e50791495632290b36 | [
"Apache-2.0"
] | null | null | null | 18.72973 | 83 | 0.546898 | [
[
[
"import pandas as pd\ndf_1 = pd.read_csv('C:/Users/chanyoung/Desktop/TCDF-master/data/limited.csv')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
cb58bd3479cd4d45995e97d726b75f8db98064b9 | 2,161 | ipynb | Jupyter Notebook | _downloads/plot_davis_club.ipynb | TeamNotJava/networkx-doc | b86736258d5459c17868dce63e626fea66b0e5f4 | [
"BSD-3-Clause"
] | null | null | null | _downloads/plot_davis_club.ipynb | TeamNotJava/networkx-doc | b86736258d5459c17868dce63e626fea66b0e5f4 | [
"BSD-3-Clause"
] | null | null | null | _downloads/plot_davis_club.ipynb | TeamNotJava/networkx-doc | b86736258d5459c17868dce63e626fea66b0e5f4 | [
"BSD-3-Clause"
] | null | null | null | 40.018519 | 831 | 0.596483 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# Davis Club\n\n\nDavis Southern Club Women\n\nShows how to make unipartite projections of the graph and compute the\nproperties of those graphs.\n\nThese data were collected by Davis et al. in the 1930s.\nThey represent observed attendance at 14 social events by 18 Southern women.\nThe graph is bipartite (clubs, women).\n\n",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport networkx as nx\nimport networkx.algorithms.bipartite as bipartite\n\nG = nx.davis_southern_women_graph()\nwomen = G.graph['top']\nclubs = G.graph['bottom']\n\nprint(\"Biadjacency matrix\")\nprint(bipartite.biadjacency_matrix(G, women, clubs))\n\n# project bipartite graph onto women nodes\nW = bipartite.projected_graph(G, women)\nprint('')\nprint(\"#Friends, Member\")\nfor w in women:\n print('%d %s' % (W.degree(w), w))\n\n# project bipartite graph onto women nodes keeping number of co-occurence\n# the degree computed is weighted and counts the total number of shared contacts\nW = bipartite.weighted_projected_graph(G, women)\nprint('')\nprint(\"#Friend meetings, Member\")\nfor w in women:\n print('%d %s' % (W.degree(w, weight='weight'), w))\n\nnx.draw(G)\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb58ce7ab1c0c06271df367e027030090af6cb00 | 6,092 | ipynb | Jupyter Notebook | serving.ipynb | AmauryFaure/project_article_1 | b9553e59586c8548640b197d0fec66cd9d4e6cce | [
"MIT"
] | null | null | null | serving.ipynb | AmauryFaure/project_article_1 | b9553e59586c8548640b197d0fec66cd9d4e6cce | [
"MIT"
] | null | null | null | serving.ipynb | AmauryFaure/project_article_1 | b9553e59586c8548640b197d0fec66cd9d4e6cce | [
"MIT"
] | null | null | null | 27.944954 | 260 | 0.570913 | [
[
[
"# Using Ray Serve to serve the CamemBERT model\n\nThis notebook explores the use of Ray Serve to create an API so that the CamemBERT model can be called from anywhere, in particular from the INSPIRE website.",
"_____no_output_____"
]
],
[
[
"# Importing libraries\nfrom transformers import CamembertForSequenceClassification,CamembertTokenizer, Trainer\nimport ray\nfrom ray import serve\nimport requests\nimport argparse\nimport torch\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"Initialization of a few arguments:",
"_____no_output_____"
]
],
[
[
"args=argparse.Namespace()\nuse_gpu = torch.cuda.is_available()\n# Use this line if you want to use a GPU (if available)\n# args.device = torch.device(\"cuda\" if use_gpu else \"cpu\")\n# Use this one to use the CPU\nargs.device = torch.device(\"cpu\")",
"_____no_output_____"
]
],
[
[
"Start the Ray Serve client:",
"_____no_output_____"
]
],
[
[
"client=serve.start()",
"2021-03-08 16:13:48,850\tINFO services.py:1172 -- View the Ray dashboard at \u001b[1m\u001b[32mhttp://127.0.0.1:8265\u001b[39m\u001b[22m\n\u001b[2m\u001b[36m(pid=10247)\u001b[0m 2021-03-08 16:13:50,841\tINFO http_state.py:67 -- Starting HTTP proxy with name 'dSsfpl:SERVE_CONTROLLER_ACTOR:SERVE_PROXY_ACTOR-node:192.168.1.25-0' on node 'node:192.168.1.25-0' listening on '127.0.0.1:8000'\n\u001b[2m\u001b[36m(pid=10249)\u001b[0m INFO: Started server process [10249]\n"
]
],
[
[
"Here we define the class that will be used with Ray Serve.\n\nThe chosen model is loaded in the initializer.\n\nThe call method runs the model on an input and returns the model's response.",
"_____no_output_____"
]
],
[
[
"class predict_class:\n    def __init__(self, args):\n        self.args = args\n        self.model = CamembertForSequenceClassification.from_pretrained(\"/home/amaury/Documents/project_a1/camembert-v1\")\n        self.tokenizer = CamembertTokenizer.from_pretrained(\"camembert-base\")\n\n        trainer = Trainer(\n            model=self.model\n        )\n        self.trainer = trainer\n        self.model.to(args.device)\n\n    # __call__ must be declared async because it awaits the request body\n    async def __call__(self, request):\n        body = await request.body()\n        text = body.decode(\"utf-8\")\n\n        tokenized = self.tokenizer(text, padding=True, truncation=True, return_tensors=\"pt\")\n\n        result = self.model(**tokenized)\n        class_input = np.argmax(result.logits.data.numpy())\n        return {\"class\": str(class_input)}",
"_____no_output_____"
],
[
"#Run those lines in case of changes in the class to be able to create a new backend and endpoint.\nclient.delete_endpoint(\"classpredict\")\nclient.delete_backend(\"classpredict\")",
"_____no_output_____"
]
],
[
[
"Now we create the API itself:",
"_____no_output_____"
]
],
[
[
"# client.create_backend(\"classpredict\", predict_class, args, ray_actor_options={\"num_gpus\": 1})\nclient.create_backend(\"classpredict\", predict_class, args)\nclient.create_endpoint(\"classpredict\",backend=\"classpredict\", route=\"/classpredict\",methods=[\"GET\",\"POST\"])",
"\u001b[2m\u001b[36m(pid=10247)\u001b[0m 2021-03-08 16:14:14,059\tINFO controller.py:178 -- Registering route '/classpredict' to endpoint 'classpredict' with methods '['GET', 'POST']'.\n"
]
],
[
[
"It can then be called like this:",
"_____no_output_____"
]
],
[
[
"payload=\"Bonjour\".encode(\"utf-8\")\nr=requests.post(\"http://127.0.0.1:8000/classpredict\",data=payload)\nr.content",
"\u001b[2m\u001b[36m(pid=10249)\u001b[0m 2021-03-08 16:14:29,975\tINFO router.py:248 -- Endpoint classpredict doesn't exist, waiting for registration.\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb58e88bf8b324d168af95b4b70621d45b09aed0 | 18,120 | ipynb | Jupyter Notebook | projects/midcurvenn/prepare_data_1.ipynb | futureseadev/hgwxx7 | 282b370afc7d9c277e6c1f5b31282f14f9236f7b | [
"MIT"
] | 6 | 2018-06-21T09:44:36.000Z | 2021-10-01T18:37:41.000Z | projects/midcurvenn/prepare_data_1.ipynb | futureseadev/hgwxx7 | 282b370afc7d9c277e6c1f5b31282f14f9236f7b | [
"MIT"
] | 15 | 2020-01-28T22:56:15.000Z | 2022-03-11T23:55:52.000Z | projects/midcurvenn/prepare_data_1.ipynb | praveentn/hgwxx7 | 282b370afc7d9c277e6c1f5b31282f14f9236f7b | [
"MIT"
] | 2 | 2018-06-25T16:40:20.000Z | 2021-10-01T18:37:42.000Z | 18,120 | 18,120 | 0.662472 | [
[
[
"## Prepare data",
"_____no_output_____"
]
],
[
[
"# mount google drive & set working directory\n# requires auth (click on url & copy token into text box when prompted)\nfrom google.colab import drive\ndrive.mount(\"/content/gdrive\", force_remount=True)\n\nimport os\nprint(os.getcwd())\n\nos.chdir('/content/gdrive/My Drive/Colab Notebooks/MidcurveNN')\n!pwd",
"Mounted at /content/gdrive\n/content\n/content/gdrive/My Drive/Colab Notebooks/MidcurveNN\n"
],
[
"!pip install drawSVG",
"Collecting drawSVG\n Downloading https://files.pythonhosted.org/packages/ee/a1/ea85ba2b4fff65055bbd7e896dbaea1a636518ececda76492eedfecc653a/drawSvg-1.2.2.tar.gz\nCollecting cairoSVG (from drawSVG)\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/fd/97/d0f51b1022aecdc3b77385daea0292f3978ec26fee31e65e8a1592ebeff1/CairoSVG-2.4.0-py3-none-any.whl (50kB)\n\u001b[K |████████████████████████████████| 51kB 6.7MB/s \n\u001b[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from drawSVG) (1.16.4)\nRequirement already satisfied: imageio in /usr/local/lib/python3.6/dist-packages (from drawSVG) (2.4.1)\nCollecting cairocffi (from cairoSVG->drawSVG)\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/0f/0f/7e21b5ddd31b610e46a879c0d21e222dd0fef428c1fc86bbd2bd57fed8a7/cairocffi-1.0.2.tar.gz (68kB)\n\u001b[K |████████████████████████████████| 71kB 9.0MB/s \n\u001b[?25hCollecting cssselect2 (from cairoSVG->drawSVG)\n Downloading https://files.pythonhosted.org/packages/12/e2/91fcd4cd32545beec6e11628d64d3e20f11b5a95dd1ccf3216fd69f176b7/cssselect2-0.2.1-py2.py3-none-any.whl\nCollecting tinycss2 (from cairoSVG->drawSVG)\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/94/2c/4e501f9c351343c8ba10d70b5a7ca97cdab2690af043a6e52ada65b85b6b/tinycss2-1.0.2-py3-none-any.whl (61kB)\n\u001b[K |████████████████████████████████| 71kB 22.3MB/s \n\u001b[?25hRequirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from cairoSVG->drawSVG) (4.3.0)\nRequirement already satisfied: defusedxml in /usr/local/lib/python3.6/dist-packages (from cairoSVG->drawSVG) (0.6.0)\nRequirement already satisfied: cffi>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from cairocffi->cairoSVG->drawSVG) (1.12.3)\nRequirement already satisfied: setuptools>=39.2.0 in /usr/local/lib/python3.6/dist-packages (from cairocffi->cairoSVG->drawSVG) (41.0.1)\nRequirement already satisfied: webencodings>=0.4 in /usr/local/lib/python3.6/dist-packages (from tinycss2->cairoSVG->drawSVG) (0.5.1)\nRequirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from pillow->cairoSVG->drawSVG) (0.46)\nRequirement already satisfied: pycparser in /usr/local/lib/python3.6/dist-packages (from cffi>=1.1.0->cairocffi->cairoSVG->drawSVG) (2.19)\nBuilding wheels for collected packages: drawSVG, cairocffi\n Building wheel for drawSVG (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for drawSVG: filename=drawSvg-1.2.2-cp36-none-any.whl size=19889 sha256=97831bc1d434b316e6b03aea6084fe9d680399ddf7389649d5d3315abfcd5592\n Stored in directory: /root/.cache/pip/wheels/f7/d7/bc/abef999ecd24a56605fe1dcad487857a08fb2fcc90a1ca60ec\n Building wheel for cairocffi (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for cairocffi: filename=cairocffi-1.0.2-cp36-none-any.whl size=88348 sha256=fc83ab1d1c47ab511c7388dbd4fabd85f82eaaddf383f50533b59b0616cdc92e\n Stored in directory: /root/.cache/pip/wheels/e7/5d/6f/fc3c2364dfd3c4cfd15d786b156077c52209d9af45496fdf12\nSuccessfully built drawSVG cairocffi\nInstalling collected packages: cairocffi, tinycss2, cssselect2, cairoSVG, drawSVG\nSuccessfully installed cairoSVG-2.4.0 cairocffi-1.0.2 cssselect2-0.2.1 drawSVG-1.2.2 tinycss2-1.0.2\n"
],
[
"\"\"\"\n Prepare Data: populating input images from raw profile data\n Takes raw data from \"data/raw/*\" files for both, profile shape (shape.dat) as well as midcurve shape (shape.mid)\n Generates raster image files from svg (simple vector graphics)\n Multiple variations are populated using image transformations.\n These images become input for further modeling (stored in \"data/input/*\")\n\"\"\"\nimport os\nimport sys\nimport PIL\nimport json\nimport shutil\nimport numpy as np\nimport PIL.ImageOps\nfrom random import shuffle\nfrom keras.preprocessing.image import img_to_array, load_img, array_to_img\nnp.set_printoptions(threshold=sys.maxsize)\n\nfrom PIL import Image\n",
"Using TensorFlow backend.\n"
],
[
"# working directory\n#wdir = os.getcwd()\nwdir = '/content/gdrive/My Drive/Colab Notebooks/MidcurveNN'\nprint(\"Working directory: \", wdir)",
"_____no_output_____"
],
[
"imdim = 100",
"_____no_output_____"
],
[
"#input_data_folder = wdir + \"\\\\data\\\\sample\"\n#input_data_folder = wdir + \"/data/newinput\"\n#print(\"input data dir: \", input_data_folder)",
"_____no_output_____"
],
[
"raw_data_folder = \"data/new_shapes\"\ninput_data_folder = \"data/new_images\"\npix2pix_data_folder = \"/data/pix2pix/datasets/pix2pix\"",
"_____no_output_____"
],
[
"def read_dat_files(datafolder=raw_data_folder):\n profiles_dict_list = []\n for file in os.listdir(datafolder):\n if os.path.isdir(os.path.join(datafolder, file)):\n continue\n filename = file.split(\".\")[0]\n profile_dict = get_profile_dict(filename,profiles_dict_list) \n if file.endswith(\".dat\"):\n with open(os.path.join(datafolder, file)) as f:\n profile_dict['Profile'] = [tuple(map(float, i.split('\\t'))) for i in f] \n if file.endswith(\".mid\"):\n with open(os.path.join(datafolder, file)) as f:\n profile_dict['Midcurve'] = [tuple(map(float, i.split('\\t'))) for i in f]\n \n profiles_dict_list.append(profile_dict)\n return profiles_dict_list\n \n\ndef get_profile_dict(shapename,profiles_dict_list):\n for i in profiles_dict_list:\n if i['ShapeName'] == shapename:\n return i\n profile_dict = {}\n profile_dict['ShapeName'] = shapename\n return profile_dict\n\n\nimport drawSvg as draw\n\ndef create_image_file(fieldname,profile_dict,datafolder=input_data_folder,imgsize=imdim, isOpenClose=True):\n d = draw.Drawing(imgsize, imgsize, origin='center')\n profilepoints = []\n for tpl in profile_dict[fieldname]:\n profilepoints.append(tpl[0])\n profilepoints.append(tpl[1])\n d.append(draw.Lines(profilepoints[0],profilepoints[1],*profilepoints,close=isOpenClose,fill='none',stroke='black'))\n \n shape = profile_dict['ShapeName']\n# d.saveSvg(datafolder+\"/\"+shape+'.svg')\n# d.savePng(datafolder+\"/\"+shape+'_'+fieldname+'.png')\n d.savePng(datafolder+\"/\"+shape+'_'+fieldname+'.png')\n\n\ndef get_original_png_files(datafolder=input_data_folder):\n pngfilenames = []\n for file in os.listdir(datafolder):\n fullpath = os.path.join(datafolder, file)\n if os.path.isdir(fullpath):\n continue\n if file.endswith(\".png\") and file.find(\"_rotated_\") == -1 and file.find(\"_translated_\")==-1 and file.find(\"_mirrored_\")==-1:\n pngfilenames.append(fullpath)\n return pngfilenames\n\n\ndef mirror_images(pngfilenames, mode=PIL.Image.TRANSPOSE):\n mirrored_filenames = []\n for fullpath in pngfilenames:\n picture= Image.open(fullpath)\n newfilename = fullpath.replace(\".png\", \"_mirrored_\"+str(mode)+\".png\")\n picture.transpose(mode).save(newfilename)\n mirrored_filenames.append(newfilename)\n return mirrored_filenames\n\n\ndef rotate_images(pngfilenames, angle=90):\n for fullpath in pngfilenames:\n picture= Image.open(fullpath)\n newfilename = fullpath.replace(\".png\", \"_rotated_\"+str(angle)+\".png\")\n picture.rotate(angle).save(newfilename)\n\n\ndef translate_images(pngfilenames, dx=1,dy=1):\n for fullpath in pngfilenames:\n picture= Image.open(fullpath)\n x_shift = dx\n y_shift = dy\n a = 1\n b = 0\n c = x_shift #left/right (i.e. 5/-5)\n d = 0\n e = 1\n f = y_shift #up/down (i.e. 5/-5)\n translate = picture.transform(picture.size, Image.AFFINE, (a, b, c, d, e, f))\n# # Calculate the size after cropping\n# size = (translate.size[0] - x_shift, translate.size[1] - y_shift)\n# # Crop to the desired size\n# translate = translate.transform(size, Image.EXTENT, (0, 0, size[0], size[1]))\n newfilename = fullpath.replace(\".png\", \"_translated_\"+str(dx)+\"_\"+str(dy)+\".png\")\n translate.save(newfilename)\n",
"_____no_output_____"
],
[
"def generate_images(datafolder=input_data_folder):\n \n if not os.path.exists(datafolder):\n os.makedirs(datafolder) \n else: \n for file in os.listdir(datafolder):\n if file.endswith(\".png\") and (file.find(\"_rotated_\") != -1 or file.find(\"_translated_\") !=-1):\n print(\"files already present, not generating...\")\n return\n \n print(\"transformed files not present, generating...\")\n profiles_dict_list = read_dat_files()\n \n print(profiles_dict_list)\n \n for profile_dict in profiles_dict_list:\n create_image_file('Profile',profile_dict,datafolder,imdim,True)\n create_image_file('Midcurve',profile_dict,datafolder,imdim,False)\n \n pngfilenames = get_original_png_files(datafolder)\n mirrored_filenames_left_right = mirror_images(pngfilenames, PIL.Image.FLIP_LEFT_RIGHT)\n mirrored_filenames_top_bottom = mirror_images(pngfilenames, PIL.Image.FLIP_TOP_BOTTOM)\n mirrored_filenames_transpose = mirror_images(pngfilenames, PIL.Image.TRANSPOSE)\n \n files_list_list = [pngfilenames,mirrored_filenames_left_right,mirrored_filenames_top_bottom,mirrored_filenames_transpose]\n for filelist in files_list_list:\n for angle in range(30,360,30):\n rotate_images(filelist,angle)\n \n for dx in range(5,21,5):\n for dy in range(5,21,5):\n translate_images(filelist,dx,-dy)",
"_____no_output_____"
],
[
"generate_images()",
"transformed files not present, generating...\n[{'ShapeName': 'Plus', 'Profile': [(4.0, 8.0), (4.0, 12.0), (10.0, 12.0), (10.0, 18.0), (14.0, 18.0), (14.0, 12.0), (20.0, 12.0), (20.0, 8.0), (14.0, 8.0), (14.0, 2.0), (10.0, 2.0), (10.0, 8.0)], 'Midcurve': [(4.0, 10.0), (12.0, 10.0), (20.0, 10.0), (12.0, 10.0), (12.0, 2.0), (12.0, 18.0)]}, {'ShapeName': 'Plus', 'Profile': [(4.0, 8.0), (4.0, 12.0), (10.0, 12.0), (10.0, 18.0), (14.0, 18.0), (14.0, 12.0), (20.0, 12.0), (20.0, 8.0), (14.0, 8.0), (14.0, 2.0), (10.0, 2.0), (10.0, 8.0)], 'Midcurve': [(4.0, 10.0), (12.0, 10.0), (20.0, 10.0), (12.0, 10.0), (12.0, 2.0), (12.0, 18.0)]}, {'ShapeName': 'SqLu', 'Profile': [(4.0, 4.0), (4.0, 16.0), (12.0, 16.0), (12.0, 12.0), (18.0, 12.0), (18.0, 4.0)], 'Midcurve': [(8.0, 16.0), (8.0, 8.0), (18.0, 8.0), (8.0, 8.0)]}, {'ShapeName': 'SqLu', 'Profile': [(4.0, 4.0), (4.0, 16.0), (12.0, 16.0), (12.0, 12.0), (18.0, 12.0), (18.0, 4.0)], 'Midcurve': [(8.0, 16.0), (8.0, 8.0), (18.0, 8.0), (8.0, 8.0)]}, {'ShapeName': 'Luvw', 'Profile': [(5.0, 5.0), (10.0, 5.0), (10.0, 20.0), (20.0, 20.0), (20.0, 25.0), (5.0, 25.0)], 'Midcurve': [(7.5, 5.0), (7.5, 22.5), (20.0, 22.5), (7.5, 22.5)]}, {'ShapeName': 'Luvw', 'Profile': [(5.0, 5.0), (10.0, 5.0), (10.0, 20.0), (20.0, 20.0), (20.0, 25.0), (5.0, 25.0)], 'Midcurve': [(7.5, 5.0), (7.5, 22.5), (20.0, 22.5), (7.5, 22.5)]}]\n"
],
[
"# Wait until all images are generated before executing the next cell.\n# A bare 'break' is a SyntaxError outside a loop, so raise instead to stop a run-all here.\nraise SystemExit(\"Wait until image generation has finished before running the following cells.\")",
"_____no_output_____"
],
[
"# move images to appropriate directories\n# directory names follows the shape name\n\nimport os\nimport shutil\n\nsrcpath = input_data_folder\ndestpath = input_data_folder\n\nfor root, subFolders, files in os.walk(srcpath):\n for file in files:\n #print(file)\n subFolder = os.path.join(destpath, file[:4])\n if not os.path.isdir(subFolder):\n os.makedirs(subFolder)\n try:\n shutil.move(os.path.join(root, file), subFolder)\n except:\n pass",
"_____no_output_____"
],
[
"print(wdir)\n\n# move images from temporary directory to actual\n# directory names follows the shape name\n\nsrc_shapes = wdir + \"/data/new_shapes/\"\nsrc_images = wdir + \"/data/new_images/\"\n\ndest_shapes = wdir + \"/data/shapes/\"\ndest_images = wdir + \"/data/images/\"\n\n\nfiles = os.listdir(src_shapes)\nfor f in files:\n shutil.move(src_shapes+f, dest_shapes)\n \nfiles = os.listdir(src_images)\nfor f in files:\n shutil.move(src_images+f, dest_images)\n\n",
"/content/gdrive/My Drive/Colab Notebooks/MidcurveNN\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb58eb8f9baea5d3fc7440465efd761fc368facb | 51,526 | ipynb | Jupyter Notebook | 01_Data_Preprocessing/1_2_1_data_convert-NASDAQ.ipynb | zjzsu2000/CMPE297_AdvanceDL_Project | be0de6c38b9d446d515ac7f6a49824934086071d | [
"MIT"
] | 1 | 2020-10-19T03:56:27.000Z | 2020-10-19T03:56:27.000Z | 01_Data_Preprocessing/1_2_1_data_convert-NASDAQ.ipynb | zjzsu2000/CMPE297_AdvanceDL_Project | be0de6c38b9d446d515ac7f6a49824934086071d | [
"MIT"
] | null | null | null | 01_Data_Preprocessing/1_2_1_data_convert-NASDAQ.ipynb | zjzsu2000/CMPE297_AdvanceDL_Project | be0de6c38b9d446d515ac7f6a49824934086071d | [
"MIT"
] | null | null | null | 32.447103 | 1,315 | 0.512479 | [
[
[
"import io\nimport os\nimport pandas as pd",
"_____no_output_____"
],
[
"data_path = 'E:\\\\BaiduYunDownload\\\\optiondata3\\\\'",
"_____no_output_____"
]
],
[
[
"## Definitions\n* **Underlying**: The stock, index, or ETF symbol\n* **Underlying_last**: The last traded price at the time of the option quote\n* **Exchange**: The exchange of the quote; an asterisk (*) represents a consolidated price across all exchanges and is the most common value\n* **Optionsymbol**: The option symbol. Note that in the format starting in 2010 this can be longer than 18 characters, depending on the length of the underlying\n* **Blank**: Always blank, kept only to preserve continuity with the older format. If you import this into a database, either skip this column or make the field nullable\n* **Optiontype**: Call or put\n* **Expiration**: The expiration date of the option\n* **Quotedate**: The date and time of the quote. Most of the time the time will be 4:00 PM; this only means it is at the close, even though some options trade until 4:15 PM EST\n* **Strike**: The strike of the option\n* **Last**: The last traded price of the option, which could even be from a previous day\n* **Bid**: The bid price of the option\n* **Ask**: The ask price of the option\n* **Volume**: The number of contracts traded\n* **Open interest**: Open interest; always a day behind. The OCC updates this number at 3:00 AM every morning and it does not change through the day\n* THE COLUMNS BELOW ARE NOT CONTAINED IN BARE-BONES PRODUCTS\n* **Implied volatility**: The implied volatility (an estimate of how much the price could change; a high number means traders believe the option could make a large move)\n* **Delta**: A measure of how much the option price changes relative to the underlying stock price. A delta of .50 means the option changes 50 cents for every 1 dollar the stock moves\n* **Gamma**: A measure of how fast the delta changes when the stock price changes. A high number means this is a very explosive option that could gain or lose value quickly\n* **Theta**: A measure of how fast the option loses value per day due to time decay. As expiration approaches, theta increases\n* **Vega**: A measure of how sensitive the option price is to a change in implied volatility. Options that are far out of the money or have a long time until expiration are more sensitive to a change in implied volatility\n* **Alias**: If possible, the old name of the option. Because of the 2010 OSI symbology it is important to know what the old symbol name was during the 2010 switch-over. If it can be determined, the old name is listed; otherwise the option symbol itself is shown. The Alias column has no use outside of 2010\n\nA short example of combining some of these columns into derived quantities follows below.",
"_____no_output_____"
]
],
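[
[
"As a quick, hedged illustration of how these columns can be combined, the sketch below computes two simple derived quantities; the names `Mid` and `Moneyness` and the toy numbers are examples only, not part of the vendor format:\n```python\n# Mid price = average of bid and ask; moneyness = underlying price / strike\nsample = pd.DataFrame({'Bid': [1.10, 0.35], 'Ask': [1.20, 0.45],\n                       'UnderlyingPrice': [100.0, 100.0], 'Strike': [95.0, 110.0]})\nsample['Mid'] = (sample['Bid'] + sample['Ask']) / 2\nsample['Moneyness'] = sample['UnderlyingPrice'] / sample['Strike']\nprint(sample)\n```",
"_____no_output_____"
]
],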
[
[
"columns= ['UnderlyingSymbol','UnderlyingPrice','Exchange','OptionSymbol','Blank','Type','Expiration', 'DataDate','Strike','Last','Bid','Ask','Volume','OpenInterest','IV','Delta','Gamma','Theta','Vega','Alias']",
"_____no_output_____"
],
[
"print(columns)",
"['UnderlyingSymbol', 'UnderlyingPrice', 'Exchange', 'OptionSymbol', 'Blank', 'Type', 'Expiration', 'DataDate', 'Strike', 'Last', 'Bid', 'Ask', 'Volume', 'OpenInterest', 'IV', 'Delta', 'Gamma', 'Theta', 'Vega', 'Alias']\n"
],
[
"test=pd.read_csv(data_path+\"\\\\201801\\\\options_20180102.csv\", header=None, \n names=columns)\nsymbols = ['AMD', 'AMED', 'ATLC', 'BLFS', 'CROX', 'DXCM', 'FATE', 'FIVN',\n 'FRPT', 'HZNP', 'JYNT', 'LPSN', 'LULU', 'MRTX', 'NEO', 'NSTG',\n 'PCTY', 'PDEX', 'PTCT', 'QDEL', 'REGI', 'RGEN', 'SPSC', 'STAA',\n 'VCYT', 'VICR', 'WIX']#from 3 years data NASDAQ clustering",
"_____no_output_____"
],
[
"df= None\nfor path in os.listdir(data_path):\n for file in os.listdir(data_path+'/'+path):\n print('reading file'+file)\n df_one = pd.read_csv(data_path+'/'+path+'/'+file,\n header=None, names=columns)\n df_one = df_one[df_one['UnderlyingSymbol'].isin(symbols)]\n print(df_one.shape)\n if df is None:\n df= df_one\n print(df.shape)\n continue\n #print(df_one.head()) \n df = pd.concat([df,df_one],axis=0)\n print(df.shape)",
"reading fileoptions_20180102.csv\n(3530, 20)\n(3530, 20)\nreading fileoptions_20180103.csv\n(3530, 20)\n(7060, 20)\nreading fileoptions_20180104.csv\n(3654, 20)\n(10714, 20)\nreading fileoptions_20180105.csv\n(3680, 20)\n(14394, 20)\nreading fileoptions_20180108.csv\n(3516, 20)\n(17910, 20)\nreading fileoptions_20180109.csv\n(3516, 20)\n(21426, 20)\nreading fileoptions_20180110.csv\n(3516, 20)\n(24942, 20)\nreading fileoptions_20180111.csv\n(3618, 20)\n(28560, 20)\nreading fileoptions_20180112.csv\n(3618, 20)\n(32178, 20)\nreading fileoptions_20180116.csv\n(3460, 20)\n(35638, 20)\nreading fileoptions_20180117.csv\n(3460, 20)\n(39098, 20)\nreading fileoptions_20180118.csv\n(3468, 20)\n(42566, 20)\nreading fileoptions_20180119.csv\n(3500, 20)\n(46066, 20)\nreading fileoptions_20180122.csv\n(3350, 20)\n(49416, 20)\nreading fileoptions_20180123.csv\n(3370, 20)\n(52786, 20)\nreading fileoptions_20180124.csv\n(3384, 20)\n(56170, 20)\nreading fileoptions_20180125.csv\n(3506, 20)\n(59676, 20)\nreading fileoptions_20180126.csv\n(3506, 20)\n(63182, 20)\nreading fileoptions_20180129.csv\n(3344, 20)\n(66526, 20)\nreading fileoptions_20180130.csv\n(3390, 20)\n(69916, 20)\nreading fileoptions_20180131.csv\n(3390, 20)\n(73306, 20)\nreading fileoptions_20180201.csv\n(3550, 20)\n(76856, 20)\nreading fileoptions_20180202.csv\n(3550, 20)\n(80406, 20)\nreading fileoptions_20180205.csv\n(3402, 20)\n(83808, 20)\nreading fileoptions_20180206.csv\n(3402, 20)\n(87210, 20)\nreading fileoptions_20180207.csv\n(3402, 20)\n(90612, 20)\nreading fileoptions_20180208.csv\n(3570, 20)\n(94182, 20)\nreading fileoptions_20180209.csv\n(3570, 20)\n(97752, 20)\nreading fileoptions_20180212.csv\n(3470, 20)\n(101222, 20)\nreading fileoptions_20180213.csv\n(3496, 20)\n(104718, 20)\nreading fileoptions_20180214.csv\n(3518, 20)\n(108236, 20)\nreading fileoptions_20180215.csv\n(3826, 20)\n(112062, 20)\nreading fileoptions_20180216.csv\n(3856, 20)\n(115918, 20)\nreading fileoptions_20180220.csv\n(3816, 20)\n(119734, 20)\nreading fileoptions_20180221.csv\n(3836, 20)\n(123570, 20)\nreading fileoptions_20180222.csv\n(4040, 20)\n(127610, 20)\nreading fileoptions_20180223.csv\n(4090, 20)\n(131700, 20)\nreading fileoptions_20180226.csv\n(3868, 20)\n(135568, 20)\nreading fileoptions_20180227.csv\n(3868, 20)\n(139436, 20)\nreading fileoptions_20180228.csv\n(3946, 20)\n(143382, 20)\nreading fileoptions_20180301.csv\n(4112, 20)\n(147494, 20)\nreading fileoptions_20180302.csv\n(4140, 20)\n(151634, 20)\nreading fileoptions_20180305.csv\n(3904, 20)\n(155538, 20)\nreading fileoptions_20180306.csv\n(3928, 20)\n(159466, 20)\nreading fileoptions_20180307.csv\n(3942, 20)\n(163408, 20)\nreading fileoptions_20180308.csv\n(4170, 20)\n(167578, 20)\nreading fileoptions_20180309.csv\n(4190, 20)\n(171768, 20)\nreading fileoptions_20180312.csv\n(3980, 20)\n(175748, 20)\nreading fileoptions_20180313.csv\n(4008, 20)\n(179756, 20)\nreading fileoptions_20180314.csv\n(4008, 20)\n(183764, 20)\nreading fileoptions_20180315.csv\n(4020, 20)\n(187784, 20)\nreading fileoptions_20180316.csv\n(4068, 20)\n(191852, 20)\nreading fileoptions_20180319.csv\n(3916, 20)\n(195768, 20)\nreading fileoptions_20180320.csv\n(3976, 20)\n(199744, 20)\nreading fileoptions_20180321.csv\n(4002, 20)\n(203746, 20)\nreading fileoptions_20180322.csv\n(4218, 20)\n(207964, 20)\nreading fileoptions_20180323.csv\n(4232, 20)\n(212196, 20)\nreading fileoptions_20180326.csv\n(3990, 20)\n(216186, 20)\nreading fileoptions_20180327.csv\n(3990, 20)\n(220176, 20)\nreading fileoptions_20180328.csv\n(4054, 
20)\n(224230, 20)\nreading fileoptions_20180329.csv\n(4292, 20)\n(228522, 20)\nreading fileoptions_20180402.csv\n(4020, 20)\n(232542, 20)\nreading fileoptions_20180403.csv\n(4020, 20)\n(236562, 20)\nreading fileoptions_20180404.csv\n(4022, 20)\n(240584, 20)\nreading fileoptions_20180405.csv\n(4204, 20)\n(244788, 20)\nreading fileoptions_20180406.csv\n(4246, 20)\n(249034, 20)\nreading fileoptions_20180409.csv\n(3970, 20)\n(253004, 20)\nreading fileoptions_20180410.csv\n(3970, 20)\n(256974, 20)\nreading fileoptions_20180411.csv\n(3970, 20)\n(260944, 20)\nreading fileoptions_20180412.csv\n(4246, 20)\n(265190, 20)\nreading fileoptions_20180413.csv\n(4270, 20)\n(269460, 20)\nreading fileoptions_20180416.csv\n(4036, 20)\n(273496, 20)\nreading fileoptions_20180417.csv\n(4072, 20)\n(277568, 20)\nreading fileoptions_20180418.csv\n(4090, 20)\n(281658, 20)\nreading fileoptions_20180419.csv\n(4158, 20)\n(285816, 20)\nreading fileoptions_20180420.csv\n(4178, 20)\n(289994, 20)\nreading fileoptions_20180423.csv\n(4094, 20)\n(294088, 20)\nreading fileoptions_20180424.csv\n(4146, 20)\n(298234, 20)\nreading fileoptions_20180425.csv\n(4192, 20)\n(302426, 20)\nreading fileoptions_20180426.csv\n(4402, 20)\n(306828, 20)\nreading fileoptions_20180427.csv\n(4494, 20)\n(311322, 20)\nreading fileoptions_20180430.csv\n(4230, 20)\n(315552, 20)\nreading fileoptions_20180501.csv\n(4274, 20)\n(319826, 20)\nreading fileoptions_20180502.csv\n(4288, 20)\n(324114, 20)\nreading fileoptions_20180503.csv\n(4504, 20)\n(328618, 20)\nreading fileoptions_20180504.csv\n(4540, 20)\n(333158, 20)\nreading fileoptions_20180507.csv\n(4306, 20)\n(337464, 20)\nreading fileoptions_20180508.csv\n(4306, 20)\n(341770, 20)\nreading fileoptions_20180509.csv\n(4316, 20)\n(346086, 20)\nreading fileoptions_20180510.csv\n(4482, 20)\n(350568, 20)\nreading fileoptions_20180511.csv\n(4528, 20)\n(355096, 20)\nreading fileoptions_20180514.csv\n(4272, 20)\n(359368, 20)\nreading fileoptions_20180515.csv\n(4286, 20)\n(363654, 20)\nreading fileoptions_20180516.csv\n(4286, 20)\n(367940, 20)\nreading fileoptions_20180517.csv\n(4302, 20)\n(372242, 20)\nreading fileoptions_20180518.csv\n(4356, 20)\n(376598, 20)\nreading fileoptions_20180521.csv\n(4216, 20)\n(380814, 20)\nreading fileoptions_20180522.csv\n(4216, 20)\n(385030, 20)\nreading fileoptions_20180523.csv\n(4218, 20)\n(389248, 20)\nreading fileoptions_20180524.csv\n(4480, 20)\n(393728, 20)\nreading fileoptions_20180525.csv\n(4532, 20)\n(398260, 20)\nreading fileoptions_20180529.csv\n(4254, 20)\n(402514, 20)\nreading fileoptions_20180530.csv\n(4256, 20)\n(406770, 20)\nreading fileoptions_20180531.csv\n(4468, 20)\n(411238, 20)\nreading fileoptions_20180601.csv\n(4578, 20)\n(415816, 20)\nreading fileoptions_20180604.csv\n(4294, 20)\n(420110, 20)\nreading fileoptions_20180605.csv\n(4362, 20)\n(424472, 20)\nreading fileoptions_20180606.csv\n(4464, 20)\n(428936, 20)\nreading fileoptions_20180607.csv\n(4794, 20)\n(433730, 20)\nreading fileoptions_20180608.csv\n(4840, 20)\n(438570, 20)\nreading fileoptions_20180611.csv\n(4538, 20)\n(443108, 20)\nreading fileoptions_20180612.csv\n(4578, 20)\n(447686, 20)\nreading fileoptions_20180613.csv\n(4578, 20)\n(452264, 20)\nreading fileoptions_20180614.csv\n(4616, 20)\n(456880, 20)\nreading fileoptions_20180615.csv\n(4692, 20)\n(461572, 20)\nreading fileoptions_20180618.csv\n(4440, 20)\n(466012, 20)\nreading fileoptions_20180619.csv\n(4724, 20)\n(470736, 20)\nreading fileoptions_20180620.csv\n(4750, 20)\n(475486, 20)\nreading fileoptions_20180621.csv\n(4934, 20)\n(480420, 
20)\nreading fileoptions_20180622.csv\n(4934, 20)\n(485354, 20)\nreading fileoptions_20180625.csv\n(4610, 20)\n(489964, 20)\nreading fileoptions_20180626.csv\n(4622, 20)\n(494586, 20)\nreading fileoptions_20180627.csv\n(4622, 20)\n(499208, 20)\nreading fileoptions_20180628.csv\n(4824, 20)\n(504032, 20)\nreading fileoptions_20180629.csv\n(4824, 20)\n(508856, 20)\nreading fileoptions_20180703.csv\n(4528, 20)\n(513384, 20)\nreading fileoptions_20180705.csv\n(4708, 20)\n(518092, 20)\nreading fileoptions_20180706.csv\n(4708, 20)\n(522800, 20)\nreading fileoptions_20180709.csv\n(4430, 20)\n(527230, 20)\nreading fileoptions_20180710.csv\n(4460, 20)\n(531690, 20)\nreading fileoptions_20180711.csv\n(4460, 20)\n(536150, 20)\nreading fileoptions_20180712.csv\n(4674, 20)\n(540824, 20)\nreading fileoptions_20180713.csv\n(4694, 20)\n(545518, 20)\nreading fileoptions_20180716.csv\n(4420, 20)\n(549938, 20)\nreading fileoptions_20180717.csv\n(4420, 20)\n(554358, 20)\nreading fileoptions_20180718.csv\n(4428, 20)\n(558786, 20)\nreading fileoptions_20180719.csv\n(4452, 20)\n(563238, 20)\nreading fileoptions_20180720.csv\n(4504, 20)\n(567742, 20)\nreading fileoptions_20180723.csv\n(4522, 20)\n(572264, 20)\nreading fileoptions_20180724.csv\n(4522, 20)\n(576786, 20)\nreading fileoptions_20180725.csv\n(4578, 20)\n(581364, 20)\nreading fileoptions_20180726.csv\n(4834, 20)\n(586198, 20)\nreading fileoptions_20180727.csv\n(4860, 20)\n(591058, 20)\nreading fileoptions_20180730.csv\n(4590, 20)\n(595648, 20)\nreading fileoptions_20180731.csv\n(4590, 20)\n(600238, 20)\nreading fileoptions_20180801.csv\n(4624, 20)\n(604862, 20)\nreading fileoptions_20180802.csv\n(4886, 20)\n(609748, 20)\nreading fileoptions_20180803.csv\n(4932, 20)\n(614680, 20)\nreading fileoptions_20180806.csv\n(4684, 20)\n(619364, 20)\nreading fileoptions_20180807.csv\n(4736, 20)\n(624100, 20)\nreading fileoptions_20180808.csv\n(4752, 20)\n(628852, 20)\nreading fileoptions_20180809.csv\n(4952, 20)\n(633804, 20)\nreading fileoptions_20180810.csv\n(5024, 20)\n(638828, 20)\nreading fileoptions_20180813.csv\n(4796, 20)\n(643624, 20)\nreading fileoptions_20180814.csv\n(4796, 20)\n(648420, 20)\nreading fileoptions_20180815.csv\n(4836, 20)\n(653256, 20)\nreading fileoptions_20180816.csv\n(4848, 20)\n(658104, 20)\nreading fileoptions_20180817.csv\n(4892, 20)\n(662996, 20)\nreading fileoptions_20180820.csv\n(4696, 20)\n(667692, 20)\nreading fileoptions_20180821.csv\n(4788, 20)\n(672480, 20)\nreading fileoptions_20180822.csv\n(4818, 20)\n(677298, 20)\nreading fileoptions_20180823.csv\n(5094, 20)\n(682392, 20)\nreading fileoptions_20180824.csv\n(5410, 20)\n(687802, 20)\nreading fileoptions_20180827.csv\n(5134, 20)\n(692936, 20)\nreading fileoptions_20180828.csv\n(5372, 20)\n(698308, 20)\nreading fileoptions_20180829.csv\n(5372, 20)\n(703680, 20)\nreading fileoptions_20180830.csv\n(5578, 20)\n(709258, 20)\nreading fileoptions_20180831.csv\n(5640, 20)\n(714898, 20)\nreading fileoptions_20180904.csv\n(5386, 20)\n(720284, 20)\nreading fileoptions_20180905.csv\n(5438, 20)\n(725722, 20)\nreading fileoptions_20180906.csv\n(5660, 20)\n(731382, 20)\nreading fileoptions_20180907.csv\n(5740, 20)\n(737122, 20)\nreading fileoptions_20180910.csv\n(5458, 20)\n(742580, 20)\nreading fileoptions_20180911.csv\n(5458, 20)\n(748038, 20)\nreading fileoptions_20180912.csv\n(5638, 20)\n(753676, 20)\nreading fileoptions_20180913.csv\n(5846, 20)\n(759522, 20)\nreading fileoptions_20180914.csv\n(6036, 20)\n(765558, 20)\nreading fileoptions_20180917.csv\n(6114, 20)\n(771672, 20)\nreading 
fileoptions_20180918.csv\n(6174, 20)\n(777846, 20)\nreading fileoptions_20180919.csv\n(6268, 20)\n(784114, 20)\nreading fileoptions_20180920.csv\n(6288, 20)\n(790402, 20)\nreading fileoptions_20180921.csv\n(6288, 20)\n(796690, 20)\nreading fileoptions_20180924.csv\n(5822, 20)\n(802512, 20)\nreading fileoptions_20180925.csv\n(5850, 20)\n(808362, 20)\nreading fileoptions_20180926.csv\n(5892, 20)\n(814254, 20)\nreading fileoptions_20180927.csv\n(6110, 20)\n(820364, 20)\nreading fileoptions_20180928.csv\n(6128, 20)\n(826492, 20)\nreading fileoptions_20181001.csv\n(5750, 20)\n(832242, 20)\nreading fileoptions_20181002.csv\n(5778, 20)\n(838020, 20)\nreading fileoptions_20181003.csv\n(5830, 20)\n(843850, 20)\nreading fileoptions_20181004.csv\n(6042, 20)\n(849892, 20)\nreading fileoptions_20181005.csv\n(6042, 20)\n(855934, 20)\nreading fileoptions_20181008.csv\n(5670, 20)\n(861604, 20)\nreading fileoptions_20181009.csv\n(5704, 20)\n(867308, 20)\nreading fileoptions_20181010.csv\n(5704, 20)\n(873012, 20)\nreading fileoptions_20181011.csv\n(6050, 20)\n(879062, 20)\nreading fileoptions_20181012.csv\n(6112, 20)\n(885174, 20)\nreading fileoptions_20181015.csv\n(5750, 20)\n(890924, 20)\nreading fileoptions_20181016.csv\n(5760, 20)\n(896684, 20)\nreading fileoptions_20181017.csv\n(5802, 20)\n(902486, 20)\nreading fileoptions_20181018.csv\n(5810, 20)\n(908296, 20)\nreading fileoptions_20181019.csv\n(5840, 20)\n(914136, 20)\nreading fileoptions_20181022.csv\n(5602, 20)\n(919738, 20)\nreading fileoptions_20181023.csv\n(5712, 20)\n(925450, 20)\nreading fileoptions_20181024.csv\n(5750, 20)\n(931200, 20)\nreading fileoptions_20181025.csv\n(6028, 20)\n(937228, 20)\nreading fileoptions_20181026.csv\n(6080, 20)\n(943308, 20)\nreading fileoptions_20181029.csv\n(5722, 20)\n(949030, 20)\nreading fileoptions_20181030.csv\n(5778, 20)\n(954808, 20)\nreading fileoptions_20181031.csv\n(5778, 20)\n(960586, 20)\nreading fileoptions_20181101.csv\n(6004, 20)\n(966590, 20)\nreading fileoptions_20181102.csv\n(6026, 20)\n(972616, 20)\nreading fileoptions_20181105.csv\n(5642, 20)\n(978258, 20)\nreading fileoptions_20181106.csv\n(5642, 20)\n(983900, 20)\nreading fileoptions_20181107.csv\n(5696, 20)\n(989596, 20)\nreading fileoptions_20181108.csv\n(5928, 20)\n(995524, 20)\nreading fileoptions_20181109.csv\n(5956, 20)\n(1001480, 20)\nreading fileoptions_20181112.csv\n(5684, 20)\n(1007164, 20)\nreading fileoptions_20181113.csv\n(5746, 20)\n(1012910, 20)\nreading fileoptions_20181114.csv\n(5754, 20)\n(1018664, 20)\nreading fileoptions_20181115.csv\n(5880, 20)\n(1024544, 20)\nreading fileoptions_20181116.csv\n(5906, 20)\n(1030450, 20)\nreading fileoptions_20181119.csv\n(5636, 20)\n(1036086, 20)\nreading fileoptions_20181120.csv\n(5698, 20)\n(1041784, 20)\nreading fileoptions_20181121.csv\n(5978, 20)\n(1047762, 20)\nreading fileoptions_20181123.csv\n(5978, 20)\n(1053740, 20)\nreading fileoptions_20181126.csv\n(5638, 20)\n(1059378, 20)\nreading fileoptions_20181127.csv\n(5638, 20)\n(1065016, 20)\nreading fileoptions_20181128.csv\n(5676, 20)\n(1070692, 20)\nreading fileoptions_20181129.csv\n(5878, 20)\n(1076570, 20)\nreading fileoptions_20181130.csv\n(5972, 20)\n(1082542, 20)\nreading fileoptions_20181203.csv\n(5714, 20)\n(1088256, 20)\nreading fileoptions_20181204.csv\n(5716, 20)\n(1093972, 20)\nreading fileoptions_20181206.csv\n(5910, 20)\n(1099882, 20)\nreading fileoptions_20181207.csv\n(6010, 20)\n(1105892, 20)\nreading fileoptions_20181210.csv\n(5766, 20)\n(1111658, 20)\nreading fileoptions_20181211.csv\n(5766, 20)\n(1117424, 
20)\nreading fileoptions_20181212.csv\n(5766, 20)\n(1123190, 20)\nreading fileoptions_20181213.csv\n(5962, 20)\n(1129152, 20)\nreading fileoptions_20181214.csv\n(6008, 20)\n(1135160, 20)\nreading fileoptions_20181217.csv\n(5708, 20)\n(1140868, 20)\nreading fileoptions_20181218.csv\n(5718, 20)\n(1146586, 20)\nreading fileoptions_20181219.csv\n(5718, 20)\n(1152304, 20)\nreading fileoptions_20181220.csv\n(5800, 20)\n(1158104, 20)\nreading fileoptions_20181221.csv\n(5800, 20)\n(1163904, 20)\nreading fileoptions_20181224.csv\n(5436, 20)\n(1169340, 20)\nreading fileoptions_20181226.csv\n(5438, 20)\n(1174778, 20)\nreading fileoptions_20181227.csv\n(5632, 20)\n(1180410, 20)\nreading fileoptions_20181228.csv\n(5634, 20)\n(1186044, 20)\nreading fileoptions_20181231.csv\n(5348, 20)\n(1191392, 20)\nreading fileoptions_20190102.csv\n(5348, 20)\n(1196740, 20)\nreading fileoptions_20190103.csv\n(5578, 20)\n(1202318, 20)\nreading fileoptions_20190104.csv\n(5702, 20)\n(1208020, 20)\nreading fileoptions_20190107.csv\n(5456, 20)\n(1213476, 20)\nreading fileoptions_20190108.csv\n(5504, 20)\n(1218980, 20)\nreading fileoptions_20190109.csv\n(5634, 20)\n(1224614, 20)\nreading fileoptions_20190110.csv\n(5956, 20)\n(1230570, 20)\nreading fileoptions_20190111.csv\n(5956, 20)\n(1236526, 20)\nreading fileoptions_20190114.csv\n(5744, 20)\n(1242270, 20)\nreading fileoptions_20190115.csv\n(5838, 20)\n(1248108, 20)\nreading fileoptions_20190116.csv\n(5858, 20)\n(1253966, 20)\nreading fileoptions_20190117.csv\n(5880, 20)\n(1259846, 20)\nreading fileoptions_20190118.csv\n(5902, 20)\n(1265748, 20)\nreading fileoptions_20190121.csv\n(0, 20)\n(1265748, 20)\nreading fileoptions_20190122.csv\n(5446, 20)\n(1271194, 20)\nreading fileoptions_20190123.csv\n(5564, 20)\n(1276758, 20)\nreading fileoptions_20190124.csv\n(5848, 20)\n(1282606, 20)\nreading fileoptions_20190125.csv\n(5848, 20)\n(1288454, 20)\nreading fileoptions_20190128.csv\n(5570, 20)\n(1294024, 20)\nreading fileoptions_20190129.csv\n(5600, 20)\n(1299624, 20)\nreading fileoptions_20190130.csv\n(5600, 20)\n(1305224, 20)\nreading fileoptions_20190131.csv\n(5800, 20)\n(1311024, 20)\nreading fileoptions_20190201.csv\n(5840, 20)\n(1316864, 20)\nreading fileoptions_20190204.csv\n(5556, 20)\n(1322420, 20)\nreading fileoptions_20190205.csv\n(5568, 20)\n(1327988, 20)\nreading fileoptions_20190206.csv\n(5608, 20)\n(1333596, 20)\nreading fileoptions_20190207.csv\n(5842, 20)\n(1339438, 20)\nreading fileoptions_20190208.csv\n(5842, 20)\n(1345280, 20)\nreading fileoptions_20190211.csv\n(5572, 20)\n(1350852, 20)\nreading fileoptions_20190212.csv\n(5582, 20)\n(1356434, 20)\nreading fileoptions_20190213.csv\n(5596, 20)\n(1362030, 20)\nreading fileoptions_20190214.csv\n(5612, 20)\n(1367642, 20)\nreading fileoptions_20190215.csv\n(5612, 20)\n(1373254, 20)\nreading fileoptions_20190219.csv\n(5370, 20)\n(1378624, 20)\nreading fileoptions_20190220.csv\n(5390, 20)\n(1384014, 20)\nreading fileoptions_20190221.csv\n(5570, 20)\n(1389584, 20)\nreading fileoptions_20190222.csv\n(5606, 20)\n(1395190, 20)\nreading fileoptions_20190225.csv\n(5368, 20)\n(1400558, 20)\nreading fileoptions_20190226.csv\n(5440, 20)\n(1405998, 20)\nreading fileoptions_20190227.csv\n(5454, 20)\n(1411452, 20)\nreading fileoptions_20190228.csv\n(5706, 20)\n(1417158, 20)\nreading fileoptions_20190301.csv\n(5800, 20)\n(1422958, 20)\nreading fileoptions_20190304.csv\n(5566, 20)\n(1428524, 20)\nreading fileoptions_20190305.csv\n(5574, 20)\n(1434098, 20)\nreading fileoptions_20190306.csv\n(5584, 20)\n(1439682, 20)\nreading 
fileoptions_20190307.csv\n(5772, 20)\n(1445454, 20)\nreading fileoptions_20190308.csv\n(5772, 20)\n(1451226, 20)\nreading fileoptions_20190311.csv\n(5578, 20)\n(1456804, 20)\nreading fileoptions_20190312.csv\n(5620, 20)\n(1462424, 20)\nreading fileoptions_20190313.csv\n(5662, 20)\n(1468086, 20)\nreading fileoptions_20190314.csv\n(5748, 20)\n(1473834, 20)\nreading fileoptions_20190315.csv\n(5808, 20)\n(1479642, 20)\nreading fileoptions_20190318.csv\n(5414, 20)\n(1485056, 20)\nreading fileoptions_20190319.csv\n(5414, 20)\n(1490470, 20)\nreading fileoptions_20190320.csv\n(5454, 20)\n(1495924, 20)\nreading fileoptions_20190321.csv\n(5664, 20)\n(1501588, 20)\nreading fileoptions_20190322.csv\n(5730, 20)\n(1507318, 20)\nreading fileoptions_20190325.csv\n(5544, 20)\n(1512862, 20)\nreading fileoptions_20190326.csv\n(5606, 20)\n(1518468, 20)\nreading fileoptions_20190327.csv\n(5618, 20)\n(1524086, 20)\nreading fileoptions_20190328.csv\n(5798, 20)\n(1529884, 20)\nreading fileoptions_20190329.csv\n(5800, 20)\n(1535684, 20)\nreading fileoptions_20190401.csv\n(5534, 20)\n(1541218, 20)\nreading fileoptions_20190402.csv\n(5552, 20)\n(1546770, 20)\nreading fileoptions_20190403.csv\n(5552, 20)\n(1552322, 20)\nreading fileoptions_20190404.csv\n(5808, 20)\n(1558130, 20)\nreading fileoptions_20190405.csv\n(5844, 20)\n(1563974, 20)\nreading fileoptions_20190408.csv\n(5576, 20)\n(1569550, 20)\nreading fileoptions_20190409.csv\n(5576, 20)\n(1575126, 20)\nreading fileoptions_20190410.csv\n(5580, 20)\n(1580706, 20)\nreading fileoptions_20190411.csv\n(5774, 20)\n(1586480, 20)\nreading fileoptions_20190412.csv\n(5782, 20)\n(1592262, 20)\nreading fileoptions_20190415.csv\n(5544, 20)\n(1597806, 20)\nreading fileoptions_20190416.csv\n(5556, 20)\n(1603362, 20)\nreading fileoptions_20190417.csv\n(5556, 20)\n(1608918, 20)\nreading fileoptions_20190418.csv\n(5578, 20)\n(1614496, 20)\nreading fileoptions_20190422.csv\n(5324, 20)\n(1619820, 20)\nreading fileoptions_20190423.csv\n(5544, 20)\n(1625364, 20)\nreading fileoptions_20190424.csv\n(5564, 20)\n(1630928, 20)\nreading fileoptions_20190425.csv\n(5776, 20)\n(1636704, 20)\nreading fileoptions_20190426.csv\n(5788, 20)\n(1642492, 20)\nreading fileoptions_20190429.csv\n(5540, 20)\n(1648032, 20)\nreading fileoptions_20190430.csv\n(5570, 20)\n(1653602, 20)\nreading fileoptions_20190501.csv\n(5570, 20)\n(1659172, 20)\nreading fileoptions_20190502.csv\n(5774, 20)\n(1664946, 20)\nreading fileoptions_20190503.csv\n(5804, 20)\n(1670750, 20)\nreading fileoptions_20190506.csv\n(5622, 20)\n(1676372, 20)\nreading fileoptions_20190507.csv\n(5656, 20)\n(1682028, 20)\nreading fileoptions_20190508.csv\n(5656, 20)\n(1687684, 20)\nreading fileoptions_20190509.csv\n(5862, 20)\n(1693546, 20)\nreading fileoptions_20190510.csv\n(5864, 20)\n(1699410, 20)\nreading fileoptions_20190513.csv\n(5628, 20)\n(1705038, 20)\nreading fileoptions_20190514.csv\n(5640, 20)\n(1710678, 20)\nreading fileoptions_20190515.csv\n(5672, 20)\n(1716350, 20)\nreading fileoptions_20190516.csv\n(5692, 20)\n(1722042, 20)\nreading fileoptions_20190517.csv\n(5710, 20)\n(1727752, 20)\nreading fileoptions_20190520.csv\n(5486, 20)\n(1733238, 20)\nreading fileoptions_20190521.csv\n(5522, 20)\n(1738760, 20)\nreading fileoptions_20190522.csv\n(5544, 20)\n(1744304, 20)\nreading fileoptions_20190523.csv\n(5706, 20)\n(1750010, 20)\nreading fileoptions_20190524.csv\n(5730, 20)\n(1755740, 20)\nreading fileoptions_20190528.csv\n(5522, 20)\n(1761262, 20)\nreading fileoptions_20190529.csv\n(5550, 20)\n(1766812, 20)\nreading 
fileoptions_20190530.csv\n(5730, 20)\n(1772542, 20)\nreading fileoptions_20190531.csv\n(5744, 20)\n(1778286, 20)\nreading fileoptions_20190603.csv\n(5592, 20)\n(1783878, 20)\nreading fileoptions_20190604.csv\n(5674, 20)\n(1789552, 20)\nreading fileoptions_20190605.csv\n(5698, 20)\n(1795250, 20)\nreading fileoptions_20190606.csv\n(5942, 20)\n(1801192, 20)\nreading fileoptions_20190607.csv\n(6006, 20)\n(1807198, 20)\nreading fileoptions_20190610.csv\n(5902, 20)\n(1813100, 20)\nreading fileoptions_20190611.csv\n(6016, 20)\n(1819116, 20)\nreading fileoptions_20190612.csv\n(6032, 20)\n(1825148, 20)\nreading fileoptions_20190613.csv\n(6220, 20)\n(1831368, 20)\nreading fileoptions_20190614.csv\n(6276, 20)\n(1837644, 20)\nreading fileoptions_20190617.csv\n(6006, 20)\n(1843650, 20)\nreading fileoptions_20190618.csv\n(6006, 20)\n(1849656, 20)\nreading fileoptions_20190619.csv\n(6062, 20)\n(1855718, 20)\nreading fileoptions_20190620.csv\n(6084, 20)\n(1861802, 20)\nreading fileoptions_20190621.csv\n(6124, 20)\n(1867926, 20)\nreading fileoptions_20190624.csv\n(5694, 20)\n(1873620, 20)\nreading fileoptions_20190625.csv\n(5694, 20)\n(1879314, 20)\nreading fileoptions_20190626.csv\n(5772, 20)\n(1885086, 20)\nreading fileoptions_20190627.csv\n(5986, 20)\n(1891072, 20)\nreading fileoptions_20190628.csv\n(6006, 20)\n(1897078, 20)\nreading fileoptions_20190701.csv\n(5764, 20)\n(1902842, 20)\nreading fileoptions_20190702.csv\n(5764, 20)\n(1908606, 20)\nreading fileoptions_20190703.csv\n(5958, 20)\n(1914564, 20)\nreading fileoptions_20190705.csv\n(5958, 20)\n(1920522, 20)\nreading fileoptions_20190708.csv\n(5716, 20)\n(1926238, 20)\nreading fileoptions_20190709.csv\n(5758, 20)\n(1931996, 20)\nreading fileoptions_20190710.csv\n(5758, 20)\n(1937754, 20)\nreading fileoptions_20190711.csv\n(5980, 20)\n(1943734, 20)\nreading fileoptions_20190712.csv\n(6010, 20)\n(1949744, 20)\nreading fileoptions_20190715.csv\n(5782, 20)\n(1955526, 20)\nreading fileoptions_20190716.csv\n(5784, 20)\n(1961310, 20)\nreading fileoptions_20190717.csv\n(5784, 20)\n(1967094, 20)\nreading fileoptions_20190718.csv\n(5790, 20)\n(1972884, 20)\nreading fileoptions_20190719.csv\n(5790, 20)\n(1978674, 20)\nreading fileoptions_20190722.csv\n(5692, 20)\n(1984366, 20)\nreading fileoptions_20190723.csv\n(5694, 20)\n(1990060, 20)\nreading fileoptions_20190724.csv\n(5808, 20)\n(1995868, 20)\nreading fileoptions_20190725.csv\n(6056, 20)\n(2001924, 20)\nreading fileoptions_20190726.csv\n(6058, 20)\n(2007982, 20)\nreading fileoptions_20190729.csv\n(5926, 20)\n(2013908, 20)\nreading fileoptions_20190730.csv\n(6024, 20)\n(2019932, 20)\nreading fileoptions_20190731.csv\n(6138, 20)\n(2026070, 20)\n"
],
[
"df.shape",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.to_csv(data_path+'/option_data_NASDAQ.csv',index = False)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5900f54a6cd5e853779f94049f4a422add0ed2 | 143,463 | ipynb | Jupyter Notebook | Tutorial1_OMIB.ipynb | Energy-MAC/LITS-Examples | 46b4e2dfb5339b4c1f15f83ae25760ea8fc0db73 | [
"BSD-3-Clause"
] | 4 | 2020-02-06T20:05:25.000Z | 2020-09-01T14:03:44.000Z | Tutorial1_OMIB.ipynb | Energy-MAC/LITS-Examples | 46b4e2dfb5339b4c1f15f83ae25760ea8fc0db73 | [
"BSD-3-Clause"
] | 6 | 2020-02-06T00:14:55.000Z | 2020-07-17T01:36:37.000Z | Tutorial1_OMIB.ipynb | Energy-MAC/LITS-Examples | 46b4e2dfb5339b4c1f15f83ae25760ea8fc0db73 | [
"BSD-3-Clause"
] | 3 | 2019-10-30T00:19:51.000Z | 2021-02-03T08:40:54.000Z | 133.329926 | 27,194 | 0.677861 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
cb592a0fd033a2517c8cc660f915cb87f66e1b81 | 20,607 | ipynb | Jupyter Notebook | doc/source/examples/networktheory/Transmission Line Properties and Manipulations.ipynb | dylanfinestone/scikit-rf | c8501ad6d7a09e7bc79eb6df87b9e2d28868fca5 | [
"BSD-3-Clause"
] | 379 | 2015-01-25T12:19:19.000Z | 2022-03-29T14:01:07.000Z | doc/source/examples/networktheory/Transmission Line Properties and Manipulations.ipynb | Lansus-Filter/scikit-rf | 3bb6cd24cad0d5d1ab291812c64707f8432ea391 | [
"BSD-3-Clause"
] | 456 | 2015-01-06T19:15:55.000Z | 2022-03-31T06:42:57.000Z | doc/source/examples/networktheory/Transmission Line Properties and Manipulations.ipynb | Lansus-Filter/scikit-rf | 3bb6cd24cad0d5d1ab291812c64707f8432ea391 | [
"BSD-3-Clause"
] | 211 | 2015-01-06T17:14:06.000Z | 2022-03-31T01:36:00.000Z | 28.700557 | 475 | 0.582132 | [
[
[
"# Modeling Transmission Line Properties\n## Table of Contents\n* [Introduction](#introduction)\n * [Propagation constant](#propagation_constant)\n * [Interlude on attenuation units](#attenuation_units)\n* [Modeling a loaded lossy transmission line using transmission line functions](#tline_functions)\n * [Input impedances, reflection coefficients and SWR](#tline_impedances)\n * [Voltages and Currents](#voltages_currents)\n* [Modeling a loaded lossy transmission line by cascading Networks](#cascading_networks)\n* [Determination of the propagation constant from the input impedance](#propagation_constant_from_zin)\n\n\n## Introduction <a class=\"anchor\" id=\"introduction\"></a>\nIn this tutorial, `scikit-rf` is used to work with some classical transmission line situations, such as calculating impedances, reflection coefficients, standing wave ratios or voltages and currents. There is at least two way of performing these calculations, one using [transmission line functions](#tline_functions) or by [creating and cascading Networks](#cascading_networks)\n\nLet's consider a lossy coaxial cable of characteristic impedance $Z_0=75 \\Omega$ of length $d=12 m$. The coaxial cable has an attenuation of 0.02 Neper/m and a [velocity factor](https://en.wikipedia.org/wiki/Velocity_factor) VF=0.67 (This corresponds roughly to a [RG-6](https://en.wikipedia.org/wiki/RG-6) coaxial). The cable is loaded with a $Z_L=150 \\Omega$ impedance. The RF frequency of interest is 250 MHz. \n\nPlease note that in `scikit-rf`, the line length is defined from the load, ie $z=0$ at the load and $z=d$ at the input of the transmission line:\n<img src=\"transmission_line_properties.svg\">\n\n\nFirst, let's make the necessary Python import statements:",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport skrf as rf \nfrom pylab import * ",
"_____no_output_____"
],
[
"# skrf figure styling\nrf.stylely()",
"_____no_output_____"
]
],
[
[
"And the constants of the problem:",
"_____no_output_____"
]
],
[
[
"freq = rf.Frequency(250, npoints=1, unit='MHz')\nZ_0 = 75 # Ohm\nZ_L = 150 # Ohm\nd = 12 # m\nVF = 0.67\natt = 0.02 # Np/m. Equivalent to 0.1737 dB/m",
"_____no_output_____"
]
],
[
[
"Before going into impedance and reflection coefficient calculations, first we need to define the transmission line properties, in particular its propagation constant.",
"_____no_output_____"
],
[
"### Propagation constant <a class=\"anchor\" id=\"propagation_constant\"></a>\nIn order to get the RF parameters of the transmission line, it is necessary to derive the propagation constant of the line. The propagation constant $\\gamma$ of the line is defined in `scikit-rf` as $\\gamma=\\alpha + j\\beta$ where $\\alpha$ is the attenuation (in Neper/m) and $\\beta=\\frac{2\\pi}{\\lambda}=\\frac{\\omega}{c}/\\mathrm{VF}=\\frac{\\omega}{c}\\sqrt{\\epsilon_r}$ the phase constant.\n\nFirst, the wavelength in the coaxial cable is $$\\lambda=\\frac{c}{f \\sqrt{\\epsilon_r}}=\\frac{c}{f} \\mathrm{VF} $$ ",
"_____no_output_____"
]
],
[
[
"lambd = rf.c/freq.f * VF\nprint('VF=', VF, 'and Wavelength:', lambd, 'm')",
"_____no_output_____"
]
],
[
[
"As the attenuation is already given in Np/m, the propagation constant is:",
"_____no_output_____"
]
],
[
[
"alpha = att # Np/m !\nbeta = freq.w/rf.c/VF\ngamma = alpha + 1j*beta\nprint('Transmission line propagation constant: gamma = ', gamma, 'rad/m')",
"_____no_output_____"
]
],
[
[
"If the attenuation would have been given in other units, `scikit-rf` provides the necessary tools to convert units, as described below.",
"_____no_output_____"
],
[
"### Interlude: On Attenuation Units <a class=\"anchor\" id=\"attenuation_units\"></a>",
"_____no_output_____"
],
[
"Attenuation is generally provided (or expected) in various kind of units. `scikit-rf` provides convenience functions to manipulate line attenuation units. \n\nFor example, the cable attenuation given in Np/m, can be expressed in dB:",
"_____no_output_____"
]
],
[
[
"print('Attenuation dB/m:', rf.np_2_db(att))",
"_____no_output_____"
]
],
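[
[
"For reference, the conversion between Neper and dB is $1\\ \\mathrm{Np} = 20 \\log_{10}(e)\\ \\mathrm{dB} \\approx 8.686\\ \\mathrm{dB}$, consistent with the value returned above. A quick check with plain math only (no scikit-rf call):",
"_____no_output_____"
]
],
[
[
"# cross-check of the Np -> dB conversion factor: 1 Np = 20*log10(e) dB (about 8.686 dB)\nprint('dB per Np:', 20*log10(exp(1)))\nprint('Manual conversion of att:', 20*log10(exp(1))*att, 'dB/m')",
"_____no_output_____"
]
],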
[
[
"Hence, the attenuation in dB/100m is:",
"_____no_output_____"
]
],
[
[
"print('Line attenuation in dB/100m:', rf.np_2_db(att)*100)",
"_____no_output_____"
]
],
[
[
"And in dB/100feet is:",
"_____no_output_____"
]
],
[
[
"print('Line attenuation in dB/100ft:', rf.np_2_db(att)*100*rf.feet_2_meter())",
"_____no_output_____"
]
],
[
[
"If the attenuation would have been given in imperial units, such as dB/100ft, the opposite conversions would have been: ",
"_____no_output_____"
]
],
[
[
"rf.db_per_100feet_2_db_per_100meter(5.2949) # to dB/100m",
"_____no_output_____"
],
[
"rf.db_2_np(5.2949)/rf.feet_2_meter(100) # to Np/m",
"_____no_output_____"
]
],
[
[
"## Using transmission line functions <a class=\"anchor\" id=\"tline_functions\"></a>\n`scikit-rf` brings few convenient functions to deal with transmission lines. They are detailed in the [transmission line functions](https://scikit-rf.readthedocs.io/en/latest/api/tlineFunctions.html) documentation pages. \n\n### Input impedances, reflection coefficients and SWR <a class=\"anchor\" id=\"tline_impedances\"></a>\nThe reflection coefficient $\\Gamma_L$ induced by the load is given by `zl_2_Gamma0()`:",
"_____no_output_____"
]
],
[
[
"Gamma0 = rf.zl_2_Gamma0(Z_0, Z_L)\nprint('|Gamma0|=', abs(Gamma0))",
"_____no_output_____"
]
],
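[
[
"This is the usual reflection coefficient formula $\\Gamma_L = \\frac{Z_L - Z_0}{Z_L + Z_0}$, which can be checked by hand:",
"_____no_output_____"
]
],
[
[
"# manual check of the load reflection coefficient formula\nGamma0_manual = (Z_L - Z_0)/(Z_L + Z_0)\nprint('|Gamma0| (manual)=', abs(Gamma0_manual))",
"_____no_output_____"
]
],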
[
[
"and its associated Standing Wave Ratio (SWR) is obtained from `zl_2_swr()`:",
"_____no_output_____"
]
],
[
[
"rf.zl_2_swr(Z_0, Z_L)",
"_____no_output_____"
]
],
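[
[
"which corresponds to the definition $\\mathrm{SWR} = \\frac{1 + |\\Gamma_L|}{1 - |\\Gamma_L|}$:",
"_____no_output_____"
]
],
[
[
"# manual check of the SWR definition\n(1 + abs(Gamma0))/(1 - abs(Gamma0))",
"_____no_output_____"
]
],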
[
[
"After propagating by a distance $d$ in the transmission line of propagation constant $\\gamma$ (hence having travelled an electrical length $\\theta=\\gamma d$), the reflection coefficient at the line input is obtained from `zl_2_Gamma_in()`:",
"_____no_output_____"
]
],
[
[
"Gamma_in = rf.zl_2_Gamma_in(Z_0, Z_L, theta=gamma*d)\nprint('|Gamma_in|=', abs(Gamma_in), 'phase=', 180/rf.pi*angle(Gamma_in))",
"_____no_output_____"
]
],
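[
[
"This is consistent with propagating the load reflection coefficient back to the input, $\\Gamma_{in} = \\Gamma_L e^{-2\\gamma d}$, which can be checked manually:",
"_____no_output_____"
]
],
[
[
"# manual check: the reflection coefficient travels down and back along the lossy line\nGamma_in_manual = Gamma0 * exp(-2*gamma*d)\nprint('|Gamma_in| (manual)=', abs(Gamma_in_manual), 'phase=', 180/rf.pi*angle(Gamma_in_manual))",
"_____no_output_____"
]
],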
[
[
"The input impedance $Z_{in}$ from `zl_2_zin()`:",
"_____no_output_____"
]
],
[
[
"Z_in = rf.zl_2_zin(Z_0, Z_L, gamma * d)\nprint('Input impedance Z_in=', Z_in)",
"_____no_output_____"
]
],
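[
[
"Behind this call is the classical input impedance formula of a lossy line, $Z_{in} = Z_0 \\frac{Z_L + Z_0 \\tanh(\\gamma d)}{Z_0 + Z_L \\tanh(\\gamma d)}$; evaluating it by hand should give the same value:",
"_____no_output_____"
]
],
[
[
"# manual check of the input impedance formula\nZ_in_manual = Z_0 * (Z_L + Z_0*tanh(gamma*d)) / (Z_0 + Z_L*tanh(gamma*d))\nprint('Input impedance (manual) Z_in=', Z_in_manual)",
"_____no_output_____"
]
],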
[
[
"Like previously, the SWR at the line input is:",
"_____no_output_____"
]
],
[
[
"rf.zl_2_swr(Z_0, Z_in)",
"_____no_output_____"
]
],
[
[
"The total line loss in dB is get from `zl_2_total_loss()`:",
"_____no_output_____"
]
],
[
[
"rf.mag_2_db10(rf.zl_2_total_loss(Z_0, Z_L, gamma*d))",
"_____no_output_____"
]
],
[
[
"### Voltages and Currents <a class=\"anchor\" id=\"voltages_currents\"></a>",
"_____no_output_____"
],
[
"Now assume that the previous circuit is excited by a source delivering a voltage $V=1 V$ associated to a source impedance $Z_s=100\\Omega$ :\n<img src=\"transmission_line_properties_vi.svg\">",
"_____no_output_____"
]
],
[
[
"Z_s = 100 # Ohm\nV_s = 1 # V",
"_____no_output_____"
]
],
[
[
"At the input of the transmission line, the voltage is a voltage divider circuit:\n$$\nV_{in} = V_s \\frac{Z_{in}}{Z_s + Z_{in}}\n$$",
"_____no_output_____"
]
],
[
[
"V_in = V_s * Z_in / (Z_s + Z_in)\nprint('Voltage at transmission line input : V_in = ', V_in, ' V')",
"_____no_output_____"
]
],
[
[
"and the current at the input of the transmission line is:\n$$\nI_{in} = \\frac{V_s}{Z_s + Z_{in}}\n$$",
"_____no_output_____"
]
],
[
[
"I_in = V_s / (Z_s + Z_in)\nprint('Current at transmission line input : I_in = ', I_in, ' A')",
"_____no_output_____"
]
],
[
[
"which represent a power of \n$$\nP_{in} = \\frac{1}{2} \\Re \\left[V_{in} I_{in}^* \\right]\n$$",
"_____no_output_____"
]
],
[
[
"P_in = 1/2 * real(V_in * conj(I_in))\nprint('Input Power : P_in = ', P_in, 'W')",
"_____no_output_____"
]
],
[
[
"The reflected power is:\n$$\nP_r = |\\Gamma_{in}|^2 P_{in}\n$$",
"_____no_output_____"
]
],
[
[
"P_r = abs(Gamma_in)**2 * P_in\nprint('Reflected power : P_r = ', P_r, 'W')",
"_____no_output_____"
]
],
[
[
"The voltage and current at the load can be deduced from the ABCD parameters of the line of length $L$ :",
"_____no_output_____"
]
],
[
[
"V_out, I_out = rf.voltage_current_propagation(V_in, I_in, Z_0,theta= gamma*d)\nprint('Voltage at load: V_out = ', V_out, 'V')\nprint('Current at load: I_out = ', I_out, 'A')",
"_____no_output_____"
]
],
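[
[
"For reference, the ABCD parameters of a transmission line section of length $d$ are\n$$\n\\begin{pmatrix} V_{in} \\\\ I_{in} \\end{pmatrix} = \\begin{pmatrix} \\cosh(\\gamma d) & Z_0 \\sinh(\\gamma d) \\\\ \\sinh(\\gamma d)/Z_0 & \\cosh(\\gamma d) \\end{pmatrix} \\begin{pmatrix} V_{out} \\\\ I_{out} \\end{pmatrix}\n$$\nThe short check below only builds these four entries and verifies the reciprocity condition $AD - BC = 1$:",
"_____no_output_____"
]
],
[
[
"# ABCD entries of this line section (textbook definition)\nA = cosh(gamma*d)\nB = Z_0 * sinh(gamma*d)\nC = sinh(gamma*d)/Z_0\nD = cosh(gamma*d)\nprint('AD - BC =', A*D - B*C, '(should be 1 for a reciprocal two-port)')",
"_____no_output_____"
]
],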
[
[
"Note that voltages and currents are expressed a peak values. RMS values are thus:\n",
"_____no_output_____"
]
],
[
[
"print(abs(V_out)/sqrt(2), abs(I_out)/sqrt(2))",
"_____no_output_____"
]
],
[
[
"The power delivered to the load is thus:",
"_____no_output_____"
]
],
[
[
"P_out = 1/2 * real(V_out * conj(I_out))\nprint('Power delivered to the load : P_out = ', P_out, ' W')",
"_____no_output_____"
]
],
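[
[
"The difference between the input power and the power delivered to the load is the power dissipated in the lossy line itself:",
"_____no_output_____"
]
],
[
[
"# power dissipated along the lossy line\nP_diss = P_in - P_out\nprint('Power dissipated in the line : P_diss = ', P_diss, ' W')",
"_____no_output_____"
]
],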
[
[
"Voltage and current are plotted below against the transmission line length (pay attention to the sign of $d$ in the voltage and current propagation: as we go from source ($z=d$) to the load ($z=0$), $\\theta$ goes in the opposite direction and should be inversed)",
"_____no_output_____"
]
],
[
[
"ds = linspace(0, d, num=1001)\n\nthetas = - gamma*ds \n\nv1 = np.full_like(ds, V_in)\ni1 = np.full_like(ds, I_in)\n\nv2, i2 = rf.voltage_current_propagation(v1, i1, Z_0, thetas)",
"_____no_output_____"
],
[
"fig, (ax_V, ax_I) = plt.subplots(2, 1, sharex=True)\nax_V.plot(ds, abs(v2), lw=2)\nax_I.plot(ds, abs(i2), lw=2, c='C1')\nax_I.set_xlabel('z [m]')\nax_V.set_ylabel('|V| [V]')\nax_I.set_ylabel('|I| [A]')\n\n\nax_V.axvline(0, c='k', lw=5)\nax_I.axvline(0, c='k', lw=5)\nax_V.text(d-2, 0.4, 'input')\nax_V.text(1, 0.6, 'load')\nax_V.axvline(d, c='k', lw=5)\nax_I.axvline(d, c='k', lw=5)\n\nax_I.set_title('Current')\nax_V.set_title('Voltage')",
"_____no_output_____"
]
],
[
[
"## Using `media` objects for transmission line calculations <a class=\"anchor\" id=\"cascading_networks\"></a>",
"_____no_output_____"
],
[
"`scikit-rf` also provides objects representing transmission line mediums. The `Media` object provides generic methods to produce Network’s for any transmission line medium, such as transmission line length (`line()`), lumped components (`resistor()`, `capacitor()`, `inductor()`, `shunt()`, etc.) or terminations (`open()`, `short()`, `load()`). For additional references, please see the [media documentation](https://scikit-rf.readthedocs.io/en/latest/api/media/). \n\nLet's create a transmission line `media` object for our coaxial line of characteristic impedance $Z_0$ and propagation constant $\\gamma$:",
"_____no_output_____"
]
],
[
[
"# if not passing the gamma parameter, it would assume that gamma = alpha + j*beta = 0 + j*1\ncoax_line = rf.media.DefinedGammaZ0(frequency=freq, Z0=Z_0, gamma=gamma)",
"_____no_output_____"
]
],
[
[
"In order to build the circuit illustrated by the figure above, all the circuit's Networks are created and then [cascaded](https://scikit-rf.readthedocs.io/en/latest/tutorials/Networks.html#Cascading-and-De-embedding) with the `**` operator: \n\n<img src=\"transmission_line_properties_networks.svg\">\n\n * [transmission line](https://scikit-rf.readthedocs.io/en/latest/api/media/generated/skrf.media.media.Media.line.html) of length $d$ (from the media created above), \n * a [resistor](https://scikit-rf.readthedocs.io/en/latest/api/media/generated/skrf.media.media.Media.resistor.html) of impedance $Z_L$, \n * then terminated by a [short](https://scikit-rf.readthedocs.io/en/latest/api/media/generated/skrf.media.media.Media.short.html). \n\nThis results in a one-port network, which $Z$-parameter is then the input impedance: ",
"_____no_output_____"
]
],
[
[
"ntw = coax_line.line(d, unit='m') ** coax_line.resistor(Z_L) ** coax_line.short()\nntw.z",
"_____no_output_____"
]
],
[
[
"Note that full Network can also be built with convenience functions [load](https://scikit-rf.readthedocs.io/en/latest/api/media/generated/skrf.media.media.Media.load.html):",
"_____no_output_____"
]
],
[
[
"ntw = coax_line.line(d, unit='m') ** coax_line.load(rf.zl_2_Gamma0(Z_0, Z_L))\nntw.z",
"_____no_output_____"
]
],
[
[
"or even more directly using or [delay_load](https://scikit-rf.readthedocs.io/en/latest/api/media/generated/skrf.media.media.Media.delay_load.html):",
"_____no_output_____"
]
],
[
[
"ntw = coax_line.delay_load(rf.zl_2_Gamma0(Z_0, Z_L), d, unit='m')\nntw.z",
"_____no_output_____"
]
],
[
[
"## Determination of the propagation constant from the input impedance <a class=\"anchor\" id=\"propagation_constant_from_zin\"></a>\nLet's assume the input impedance of a short‐circuited lossy transmission line of length d=1.5 m and a characteristic impedance of $Z_0=$100 Ohm has been measured to $Z_{in}=40 - 280j \\Omega$. \n\n<img src=\"transmission_line_properties_propagation_constant.svg\">\n\nThe transmission line propagation constant $\\gamma$ is unknown and researched. Let see how to deduce its value using `scikit-rf`:",
"_____no_output_____"
]
],
[
[
"# input data\nz_in = 20 - 140j\nz_0 = 75\nd = 1.5\nGamma_load = -1 # short",
"_____no_output_____"
]
],
[
[
"Since we know the input impedance, we can deduce the reflection coefficient at the input of the transmission line. Since there is a direction relationship between the reflection coefficient at the load and at the input of the line:\n\n$$\n\\Gamma_{in} = \\Gamma_L e^{- 2 \\gamma d}\n$$\n\nwe can deduce the propagation constant value $\\gamma$ as:\n$$\n\\gamma = -\\frac{1}{2d} \\ln \\left( \\frac{\\Gamma_{in}}{\\Gamma_l} \\right)\n$$\n\nThis is what the convenience function `reflection_coefficient_2_propagation_constant` is doing:",
"_____no_output_____"
]
],
[
[
"# reflection coefficient at input\nGamma_in = rf.zl_2_Gamma0(z_0, z_in)\n# line propagation constant\ngamma = rf.reflection_coefficient_2_propagation_constant(Gamma_in, Gamma_load, d)\nprint('Line propagation constant, gamma =', gamma, 'rad/m')",
"_____no_output_____"
]
],
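[
[
"The same value should be obtained by applying the formula above directly (using the principal branch of the logarithm):",
"_____no_output_____"
]
],
[
[
"# manual check of gamma = -1/(2d) * ln(Gamma_in/Gamma_L)\ngamma_manual = -1/(2*d) * log(Gamma_in/Gamma_load)\nprint('gamma (manual) =', gamma_manual, 'rad/m')",
"_____no_output_____"
]
],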
[
[
"One can check the consistency of the result by making the reverse calculation: the input impedance at a distance $d$ from the load $Z_l$:",
"_____no_output_____"
]
],
[
[
"rf.zl_2_zin(z_0, zl=0, theta=gamma * d)",
"_____no_output_____"
]
],
[
[
"Which was indeed the value given as input of the example.",
"_____no_output_____"
],
[
"Now that the line propagation constant has been determined, one can replace the short by a load resistance:",
"_____no_output_____"
]
],
[
[
"rf.zl_2_zin(z_0, zl=50+50j, theta=gamma * d)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
cb59302a8d3f47f1294bdce916d5c34c1cc2d666 | 9,060 | ipynb | Jupyter Notebook | Sesiones/Ejemplos Sesion 6/Ejemplos Modulos/Sesion 06-2.ipynb | FrancoLorenzo/Python | fbd99380c75b3370171d5cd820cdffbdb73d9ab6 | [
"MIT"
] | null | null | null | Sesiones/Ejemplos Sesion 6/Ejemplos Modulos/Sesion 06-2.ipynb | FrancoLorenzo/Python | fbd99380c75b3370171d5cd820cdffbdb73d9ab6 | [
"MIT"
] | null | null | null | Sesiones/Ejemplos Sesion 6/Ejemplos Modulos/Sesion 06-2.ipynb | FrancoLorenzo/Python | fbd99380c75b3370171d5cd820cdffbdb73d9ab6 | [
"MIT"
] | null | null | null | 21.990291 | 251 | 0.478587 | [
[
[
"# Modulos",
"_____no_output_____"
]
],
[
[
"import os",
"_____no_output_____"
],
[
"os.getcwd()",
"_____no_output_____"
],
[
"os.makedirs('C:\\\\Users\\\\Maricel\\\\Desktop\\\\Python-Basico\\\\Sesion 06\\\\PruebaOS')",
"_____no_output_____"
],
[
"path = 'C:\\\\Users\\\\Maricel\\\\Desktop\\\\Python-Basico\\\\Sesion 06\\\\PruebaOS\\\\setup.py'",
"_____no_output_____"
],
[
"os.path.basename(path)",
"_____no_output_____"
],
[
"os.path.dirname(path)",
"_____no_output_____"
],
[
"os.path.getsize(path)",
"_____no_output_____"
],
[
"print(os.path.exists('C:\\\\Windows'))\nprint(os.path.exists('C:\\\\some_made_up_folder'))\nprint(os.path.isdir('C:\\\\Windows\\\\System32'))\nprint(os.path.isfile('C:\\\\Windows\\\\System32'))\nprint(os.path.isdir('C:\\\\Windows\\\\System32\\\\calc.exe'))\nprint(os.path.isfile('C:\\\\Windows\\\\System32\\\\calc.exe'))",
"True\nFalse\nTrue\nFalse\nFalse\nTrue\n"
],
[
"import sys",
"_____no_output_____"
],
[
"print(sys.argv)",
"['C:\\\\Users\\\\Maricel\\\\anaconda3\\\\envs\\\\Python-Basico\\\\lib\\\\site-packages\\\\ipykernel_launcher.py', '-f', 'C:\\\\Users\\\\Maricel\\\\AppData\\\\Roaming\\\\jupyter\\\\runtime\\\\kernel-db247ec3-ee09-4d5f-9220-45a879423521.json']\n"
],
[
"sys.path",
"_____no_output_____"
],
[
"sys.version",
"_____no_output_____"
],
[
"import webbrowser",
"_____no_output_____"
],
[
"webbrowser.open(\"http://www.python.org\", new=2, autoraise=True)",
"_____no_output_____"
],
[
"webbrowser.open_new(\"https://pypi.python.org/pypi\")",
"_____no_output_____"
],
[
"webbrowser.open_new_tab(\"https://www.python.org/psf-landing/\")",
"_____no_output_____"
],
[
"comando = \"/usr/bin/firefox %s\"\nnav3 = webbrowser.get(comando)\nwebbrowser.register(\"navegador\", None, nav3)\nwebbrowser.get(\"navegador\").open(\"http://www.python.org\")",
"_____no_output_____"
],
[
"browser = None\nbrowsers = (\"firefox\", \"opera\", \"mosaic\", None)\nfor b in browsers:\n try:\n browser = webbrowser.get(b)\n except webbrowser.Error:\n if b is None:\n print(\"No hay navegador registrado.\")\n else:\n print(\"No se ha encontrado '%s'.\" % b)\n else:\n if b is None:\n print(\"Navegador por defecto.\")\n else:\n print(\"Navegador '%s'.\" % b)",
"No se ha encontrado 'firefox'.\nNo se ha encontrado 'opera'.\nNo se ha encontrado 'mosaic'.\nNavegador por defecto.\n"
],
[
"webbrowser.open_new_tab(\"http://www.recursospython.com/\")\n# Abrir una nueva ventana en Chrome\ntry:\n webbrowser.get(\"chrome\").open_new(\"http://www.recursospython.com/\")\nexcept webbrowser.Error:\n print(\"No se ha encontrado Chrome.\")",
"No se ha encontrado Chrome.\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb59413ccd099912a01633f36efb8c256627a0df | 47,941 | ipynb | Jupyter Notebook | 10 Selenium, Projekte/02 Arbeit mit Selenium-Copy1_Arbeitskopie.ipynb | edzardschade/Edzard_CAS | 42c5db16e506e1fd8653d49e7509b8d2b59353e7 | [
"MIT"
] | null | null | null | 10 Selenium, Projekte/02 Arbeit mit Selenium-Copy1_Arbeitskopie.ipynb | edzardschade/Edzard_CAS | 42c5db16e506e1fd8653d49e7509b8d2b59353e7 | [
"MIT"
] | null | null | null | 10 Selenium, Projekte/02 Arbeit mit Selenium-Copy1_Arbeitskopie.ipynb | edzardschade/Edzard_CAS | 42c5db16e506e1fd8653d49e7509b8d2b59353e7 | [
"MIT"
] | null | null | null | 39.686258 | 1,599 | 0.477754 | [
[
[
"# Arbeit mit Selenium_Arbeitskopie",
"_____no_output_____"
],
[
"Die Arbeit mit Selenium erfordert etwas Übung. Aber der Zeitaufwand lohnt sich. Es gibt mit Selenium kaum ein Webdienst der nicht scrapbar wird. Beginnen wir aber wie üblich mit der Dokumentation. Sie ist im Falle von Selenium sehr hilfreich. Ihr findet [sie hier](http://selenium-python.readthedocs.io/). Und [hier](http://selenium-python.readthedocs.io/locating-elements.html). ",
"_____no_output_____"
],
[
"Um Selenium kennenzulernen, gehen wir zurück zu unserem Beispiel der Lehren: https://www.berufsberatung.ch/dyn/show/2930. Nun wollen wir keine URLs generieren, um unsere Inhalte zu finden. Wir wollen stattdessen mit der Site interagieren. So sollten wir alle Einträge bekommen. BeautifulSoup werden wir trotzdem noch dazu nehmen. Denn Selenium liest keine Inhalte aus. Die Library lässt uns einfach mit dem Webdienst interagieren.",
"_____no_output_____"
],
[
"Beginnen wir mit den Imports",
"_____no_output_____"
]
],
[
[
"from bs4 import BeautifulSoup\nimport requests\nimport time # damit kann z.B. den Browser verlangsamen, damit man nicht sofort als Mschiner erkennbar wird.\nimport datetime # braucht es jetzt nicht.\nimport pandas as pd\nfrom selenium import webdriver\nfrom selenium.webdriver.common.keys import Keys",
"_____no_output_____"
]
],
[
[
"Und dann schicken wir Selenium auf die Seite.",
"_____no_output_____"
]
],
[
[
"#Wir starten den Browser:\ndriver = webdriver.Chrome('/usr/local/bin/chromedriver')\n#Wir besuchen die Site\ndriver.get(\"https://www.berufsberatung.ch/dyn/show/2930\") # so nenne ich jetzt den Browser",
"_____no_output_____"
]
],
[
[
"Nun suchen wir mit dem Inspector die Elemente, die wir ansteuern wollen.",
"_____no_output_____"
]
],
[
[
"driver.find_element_by_class_name(\"fs-autocomplete-trigger\").click()",
"_____no_output_____"
],
[
"driver.find_element_by_id(\"sw_453\").click()",
"_____no_output_____"
],
[
"driver.find_element_by_id(\"uxfs-action\").click()",
"_____no_output_____"
],
[
"test = driver.page_source",
"_____no_output_____"
],
[
"type(test) # zeigt mir zur Orientierung, in welcher Form das Objekt vorlieg",
"_____no_output_____"
],
[
"# Wir öffnen eine Datei zum Schreiben (\"w\": write)\nfile = open(\"lehrstellen.html\", \"w\")\nfile.write(test)\nfile.close()\n# Abschließend müssen wir die Datei wieder schließen",
"_____no_output_____"
]
],
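[
[
"The same elements could also be located with the other locator strategies described in the Selenium documentation (CSS selectors, XPath). The selectors below are only a sketch based on the class and id attributes used above:",
"_____no_output_____"
]
],
[
[
"# alternative locator strategies (sketch only, based on the class/id attributes used above)\ntrigger = driver.find_element_by_css_selector('.fs-autocomplete-trigger')\nbutton = driver.find_element_by_xpath('//*[@id=\"uxfs-action\"]')\nprint(trigger.tag_name, button.tag_name)",
"_____no_output_____"
]
],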
[
[
"Aber wir wollen alle Ergänzen. Holen wir deshalb die Nummern nochmals",
"_____no_output_____"
]
],
[
[
"r = requests.get('https://www.berufsberatung.ch/dyn/show/2930') #Seite suchen\nsoup = BeautifulSoup(r.text, 'html.parser') # \nids = [] \nfor elem in soup.find('ul',{'class':'ui-autocomplete full-list '}).find_all('a'):\n #ich könnte weitere find oder find_all - Befehle anfügen und damit immer weiter meine \n #Suche präzisieren. Hier sind wir aber bereits so auf der Listenreihen und können das\n #in der for-Schlaufe abfregen\n elem = \"sw_\" + elem['data-id']\n ids.append(elem)",
"_____no_output_____"
],
[
"len(ids)",
"_____no_output_____"
],
[
"ids[:5]",
"_____no_output_____"
]
],
[
[
"Testen wir es mit den ersten fünf Einträgen",
"_____no_output_____"
]
],
[
[
"for elem in ids[:5]:\n print(elem)\n time.sleep(.5) #damit es nicht zu schnell geht\n driver.find_element_by_class_name(\"fs-autocomplete-trigger\").click()\n time.sleep(.5)\n driver.find_element_by_id(elem).click()",
"sw_453\nsw_452\nsw_55\nsw_7617\nsw_7618\n"
],
[
"#mit der obigen Abfrage werden aber nur die ersten 230 Berufe abgefragt\ndriver.find_element_by_id(\"uxfs-action\").click()",
"_____no_output_____"
]
],
[
[
"Zeigen wird alle Ergebnisse an",
"_____no_output_____"
]
],
[
[
"driver.find_element_by_id(\"aSearchPaging\").click()",
"_____no_output_____"
]
],
[
[
"Speichern wir die Ergebnisse ab",
"_____no_output_____"
]
],
[
[
"text = driver.page_source",
"_____no_output_____"
],
[
"def lehrstellen(html): #wir bauen die Funktion \"lehrstellen\"\n \n soup = BeautifulSoup(html, 'html.parser') #wir veraubeiten dea Objekt mit BS\n # \\ ermöglicht Umbruch\n ortsliste = soup.find('div', {'class':'resultpart result-body'})\\\n .find_all('div', {'class':'display-table-cell table-col-3'})\n #mit find bzw. find_all suchen wir die entsprechenden Elemente\n firmenliste = soup.find('div', {'class':'resultpart result-body'})\\\n .find_all('div', {'class':'display-table-cell bold company data-id table-col-1'})\n \n jahresliste = soup.find('div', {'class':'resultpart result-body'})\\\n .find_all('div', {'class':'display-table-cell float-left-for-sd table-col-4'})\n \n anzahlliste = soup.find('div', {'class':'resultpart result-body'})\\\n .find_all('div', {'class':'display-table-cell text-align-center float-left-for-sd table-col-5'})\n \n lst = []\n #kjetzt bauen wir die vier Variablen der Liste lst\n for ort, firma, jahr, anzahl in zip(ortsliste,firmenliste,jahresliste, anzahlliste):\n \n mini_dict = {'Ort':ort.text, #gebe es immer als text aus\n 'Firma':firma.text,\n 'Jahr':jahr.text,\n 'Anzahl':int(anzahl.text.replace(' Lehrstelle(n)\\n','').replace('\\n',''))}\n lst.append(mini_dict)\n \n return lst",
"_____no_output_____"
],
[
"lehrstellen(text)",
"_____no_output_____"
],
[
"pd.DataFrame(lehrstellen(text))",
"_____no_output_____"
]
],
[
[
"## Bringen wir alles zusammen",
"_____no_output_____"
]
],
[
[
"#Funktion, um nur die Informationen herauszuziehen, die uns interessieren\ndef lehrstellen(html):\n \n soup = BeautifulSoup(html, 'lxml')\n try:\n ortsliste = soup.find('div', {'class':'resultpart result-body'})\\\n .find_all('div', {'class':'display-table-cell table-col-3'})\n \n firmenliste = soup.find('div', {'class':'resultpart result-body'})\\\n .find_all('div', {'class':'display-table-cell bold company data-id table-col-1'})\n \n jahresliste = soup.find('div', {'class':'resultpart result-body'})\\\n .find_all('div', {'class':'display-table-cell float-left-for-sd table-col-4'})\n \n anzahlliste = soup.find('div', {'class':'resultpart result-body'})\\\n .find_all('div', {'class':'display-table-cell text-align-center float-left-for-sd table-col-5'})\n \n lehrstelle = soup.find('ul',{'class':'ui-autocomplete full-list '})\\\n .find_all('a')\n lst = []\n \n for ort, firma, jahr, anzahl,lehr in zip(ortsliste,firmenliste,jahresliste, anzahlliste,lehrstelle):\n \n mini_dict = {'Ort':ort.text,\n 'Firma':firma.text,\n 'Jahr':jahr.text,\n 'Anzahl':int(anzahl.text.replace(' Lehrstelle(n)\\n','').replace('\\n','')),\n 'Lehrstelle':lehr['data-value']}\n lst.append(mini_dict)\n \n return pd.DataFrame(lst).to_csv(\"d/\"+str(datetime.datetime.now())+\".csv\")\n \n except:\n return pd.DataFrame([{'Ort':'Keine Treffer',\n 'Firma':'Keine Treffer',\n 'Jahr':'Keine Treffer',\n 'Anzahl':'Keine Treffer'}])\n \n#Bauen wir Listen aller Job-IDs\nr = requests.get('https://www.berufsberatung.ch/dyn/show/2930')\nsoup = BeautifulSoup(r.text, 'lxml')\nids = []\nfor elem in soup.find('ul',{'class':'ui-autocomplete full-list '}).find_all('a'):\n elem = \"sw_\" + elem['data-id']\n ids.append(elem)\n \n#Teilen wir diese Listen mit Länge von je 5 Teilen. \n#Das habe ich nicht selber geschrieben, sondern hier geholt:\n#https://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks\nidslst = [ids[i:i + 5] for i in range(0, len(ids), 5)]\n\ncount = 0\nfor ids in idslst:\n\n #Starten wir den Chrome-Browser und besuchen die Site\n driver = webdriver.Chrome('/usr/local/bin/chromedriver')\n driver.get(\"https://www.berufsberatung.ch/dyn/show/2930\")\n\n #Bereiten wir die Suche vor \n for elem in ids:\n time.sleep(1) #damit es nicht zu schnell geht\n driver.find_element_by_class_name(\"fs-autocomplete-trigger\").click()\n time.sleep(1)\n driver.find_element_by_id(elem).click()\n \n #Suchen wir\n time.sleep(1)\n driver.find_element_by_id(\"uxfs-action\").click()\n\n #Nun nun sorgen wir dafür, dass alle Ergebnisse anzeigt werden.\n exists = 1\n while(exists==1):\n loadmore = driver.find_element_by_id(\"aSearchPaging\")\n if loadmore.text == \"MEHR ERGEBNISSE ANZEIGEN\":\n driver.find_element_by_id(\"aSearchPaging\").click()\n time.sleep(1)\n else:\n exists = 0\n \n print(count)\n count += 1\n \n lehrstellen(driver.page_source)\n driver.close()",
"_____no_output_____"
]
],
[
[
"Kreieren wir ein kleine .py File und lösen es von der Commandline aus.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb595d32b94c6bf100ab225afe9cb893bac2cbd6 | 575,808 | ipynb | Jupyter Notebook | temporal-difference/Temporal_Difference_Solution.ipynb | gogunubuntu/deep-reinforcement-learning | f2ed7a6900cb771deea16b993e834b23b9b2efab | [
"MIT"
] | null | null | null | temporal-difference/Temporal_Difference_Solution.ipynb | gogunubuntu/deep-reinforcement-learning | f2ed7a6900cb771deea16b993e834b23b9b2efab | [
"MIT"
] | null | null | null | temporal-difference/Temporal_Difference_Solution.ipynb | gogunubuntu/deep-reinforcement-learning | f2ed7a6900cb771deea16b993e834b23b9b2efab | [
"MIT"
] | null | null | null | 797.518006 | 53,201 | 0.773042 | [
[
[
"# Temporal-Difference Methods\n\nIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.\n\nWhile we have provided some starter code, you are welcome to erase these hints and write your code from scratch.\n\n---\n\n### Part 0: Explore CliffWalkingEnv\n\nWe begin by importing the necessary packages.",
"_____no_output_____"
]
],
[
[
"import sys\nimport gym\nimport numpy as np\nimport random\nimport math\nfrom collections import defaultdict, deque\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport check_test\nfrom plot_utils import plot_values",
"_____no_output_____"
]
],
[
[
"Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment.",
"_____no_output_____"
]
],
[
[
"env = gym.make('CliffWalking-v0')",
"_____no_output_____"
]
],
[
[
"The agent moves through a $4\\times 12$ gridworld, with states numbered as follows:\n```\n[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],\n [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],\n [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35],\n [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]\n```\nAt the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.\n\nThe agent has 4 potential actions:\n```\nUP = 0\nRIGHT = 1\nDOWN = 2\nLEFT = 3\n```\n\nThus, $\\mathcal{S}^+=\\{0, 1, \\ldots, 47\\}$, and $\\mathcal{A} =\\{0, 1, 2, 3\\}$. Verify this by running the code cell below.",
"_____no_output_____"
]
],
[
[
"print(env.action_space)\nprint(env.observation_space)",
"Discrete(4)\nDiscrete(48)\n"
]
],
[
[
"In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function.\n\n_**Note**: You can safely ignore the values of the cliff \"states\" as these are not true states from which the agent can make decisions. For the cliff \"states\", the state-value function is not well-defined._",
"_____no_output_____"
]
],
[
[
"# define the optimal state-value function\nV_opt = np.zeros((4,12))\nV_opt[0][0:13] = -np.arange(3, 15)[::-1]\nV_opt[1][0:13] = -np.arange(3, 15)[::-1] + 1\nV_opt[2][0:13] = -np.arange(3, 15)[::-1] + 2\nV_opt[3][0] = -13\n\nplot_values(V_opt)",
"_____no_output_____"
]
],
[
[
"### Part 1: TD Control: Sarsa\n\nIn this section, you will write your own implementation of the Sarsa control algorithm.\n\nYour algorithm has four arguments:\n- `env`: This is an instance of an OpenAI Gym environment.\n- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.\n- `alpha`: This is the step-size parameter for the update step.\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n\nThe algorithm returns as output:\n- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.\n\nPlease complete the function in the code cell below.\n\n(_Feel free to define additional functions to help you to organize your code._)",
"_____no_output_____"
]
],
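[
[
"For reference, after every step Sarsa updates the action-value estimate according to\n$$Q(S_t, A_t) \\leftarrow Q(S_t, A_t) + \\alpha \\left( R_{t+1} + \\gamma Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) \\right)$$\nwhere the next action $A_{t+1}$ is chosen by the same $\\epsilon$-greedy behavior policy. This is what the helper functions below implement.",
"_____no_output_____"
]
],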
[
[
"def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None):\n \"\"\"Returns updated Q-value for the most recent experience.\"\"\"\n current = Q[state][action] # estimate in Q-table (for current state, action pair)\n # get value of state, action pair at next time step\n Qsa_next = Q[next_state][next_action] if next_state is not None else 0 \n target = reward + (gamma * Qsa_next) # construct TD target\n new_value = current + (alpha * (target - current)) # get updated value\n return new_value\n\ndef epsilon_greedy(Q, state, nA, eps):\n \"\"\"Selects epsilon-greedy action for supplied state.\n \n Params\n ======\n Q (dictionary): action-value function\n state (int): current state\n nA (int): number actions in the environment\n eps (float): epsilon\n \"\"\"\n if random.random() > eps: # select greedy action with probability epsilon\n return np.argmax(Q[state])\n else: # otherwise, select an action randomly\n return random.choice(np.arange(env.action_space.n))",
"_____no_output_____"
],
[
"def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100):\n nA = env.action_space.n # number of actions\n Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays\n \n # monitor performance\n tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores\n avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes\n \n for i_episode in range(1, num_episodes+1):\n # monitor progress\n if i_episode % 100 == 0:\n print(\"\\rEpisode {}/{}\".format(i_episode, num_episodes), end=\"\")\n sys.stdout.flush() \n score = 0 # initialize score\n state = env.reset() # start episode\n \n eps = 1.0 / i_episode # set value of epsilon\n action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection\n \n while True:\n next_state, reward, done, info = env.step(action) # take action A, observe R, S'\n score += reward # add reward to agent's score\n if not done:\n next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action\n Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \\\n state, action, reward, next_state, next_action)\n \n state = next_state # S <- S'\n action = next_action # A <- A'\n if done:\n Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \\\n state, action, reward)\n tmp_scores.append(score) # append score\n break\n if (i_episode % plot_every == 0):\n avg_scores.append(np.mean(tmp_scores))\n\n # plot performance\n plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))\n plt.xlabel('Episode Number')\n plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)\n plt.show()\n # print best 100-episode performance\n print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) \n return Q",
"_____no_output_____"
]
],
[
[
"Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. \n\nIf the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.",
"_____no_output_____"
]
],
[
[
"# obtain the estimated optimal policy and corresponding action-value function\nQ_sarsa = sarsa(env, 5000, .01)\n\n# print the estimated optimal policy\npolicy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12)\ncheck_test.run_check('td_control_check', policy_sarsa)\nprint(\"\\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):\")\nprint(policy_sarsa)\n\n# plot the estimated optimal state-value function\nV_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)])\nplot_values(V_sarsa)",
"Episode 5000/5000"
]
],
[
[
"### Part 2: TD Control: Q-learning\n\nIn this section, you will write your own implementation of the Q-learning control algorithm.\n\nYour algorithm has four arguments:\n- `env`: This is an instance of an OpenAI Gym environment.\n- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.\n- `alpha`: This is the step-size parameter for the update step.\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n\nThe algorithm returns as output:\n- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.\n\nPlease complete the function in the code cell below.\n\n(_Feel free to define additional functions to help you to organize your code._)",
"_____no_output_____"
]
],
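[
[
"For reference, Q-learning (sarsamax) replaces the sampled next-action value in the Sarsa target with a maximum over actions:\n\n$$Q(S_t, A_t) \\leftarrow Q(S_t, A_t) + \\alpha \\big( R_{t+1} + \\gamma \\max_a Q(S_{t+1}, a) - Q(S_t, A_t) \\big)$$\n\nThis is the update computed by `update_Q_sarsamax` in the next cell.",
"_____no_output_____"
]
],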
[
[
"def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None):\n \"\"\"Returns updated Q-value for the most recent experience.\"\"\"\n current = Q[state][action] # estimate in Q-table (for current state, action pair)\n Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # value of next state \n target = reward + (gamma * Qsa_next) # construct TD target\n new_value = current + (alpha * (target - current)) # get updated value \n return new_value",
"_____no_output_____"
],
[
"def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100):\n \"\"\"Q-Learning - TD Control\n \n Params\n ======\n num_episodes (int): number of episodes to run the algorithm\n alpha (float): learning rate\n gamma (float): discount factor\n plot_every (int): number of episodes to use when calculating average score\n \"\"\"\n nA = env.action_space.n # number of actions\n Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays\n \n # monitor performance\n tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores\n avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes\n \n for i_episode in range(1, num_episodes+1):\n # monitor progress\n if i_episode % 100 == 0:\n print(\"\\rEpisode {}/{}\".format(i_episode, num_episodes), end=\"\")\n sys.stdout.flush()\n score = 0 # initialize score\n state = env.reset() # start episode\n eps = 1.0 / i_episode # set value of epsilon\n \n while True:\n action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection\n next_state, reward, done, info = env.step(action) # take action A, observe R, S'\n score += reward # add reward to agent's score\n Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \\\n state, action, reward, next_state) \n state = next_state # S <- S'\n if done:\n tmp_scores.append(score) # append score\n break\n if (i_episode % plot_every == 0):\n avg_scores.append(np.mean(tmp_scores))\n \n # plot performance\n plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))\n plt.xlabel('Episode Number')\n plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)\n plt.show()\n # print best 100-episode performance\n print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))\n return Q",
"_____no_output_____"
]
],
[
[
"Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. \n\nIf the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.",
"_____no_output_____"
]
],
[
[
"# obtain the estimated optimal policy and corresponding action-value function\nQ_sarsamax = q_learning(env, 5000, .01)\n\n# print the estimated optimal policy\npolicy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12))\ncheck_test.run_check('td_control_check', policy_sarsamax)\nprint(\"\\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):\")\nprint(policy_sarsamax)\n\n# plot the estimated optimal state-value function\nplot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)])",
"Episode 5000/5000"
]
],
[
[
"### Part 3: TD Control: Expected Sarsa\n\nIn this section, you will write your own implementation of the Expected Sarsa control algorithm.\n\nYour algorithm has four arguments:\n- `env`: This is an instance of an OpenAI Gym environment.\n- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.\n- `alpha`: This is the step-size parameter for the update step.\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n\nThe algorithm returns as output:\n- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.\n\nPlease complete the function in the code cell below.\n\n(_Feel free to define additional functions to help you to organize your code._)",
"_____no_output_____"
]
],
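[
[
"For reference, Expected Sarsa replaces the sampled next-action value with its expectation under the current (epsilon-greedy) policy $\\pi$:\n\n$$Q(S_t, A_t) \\leftarrow Q(S_t, A_t) + \\alpha \\big( R_{t+1} + \\gamma \\sum_a \\pi(a \\mid S_{t+1}) Q(S_{t+1}, a) - Q(S_t, A_t) \\big)$$\n\nThis expectation is the dot product computed by `update_Q_expsarsa` in the next cell.",
"_____no_output_____"
]
],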
[
[
"def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None):\n \"\"\"Returns updated Q-value for the most recent experience.\"\"\"\n current = Q[state][action] # estimate in Q-table (for current state, action pair)\n policy_s = np.ones(nA) * eps / nA # current policy (for next state S')\n policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action\n Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step\n target = reward + (gamma * Qsa_next) # construct target\n new_value = current + (alpha * (target - current)) # get updated value \n return new_value",
"_____no_output_____"
],
[
"def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100):\n \"\"\"Expected SARSA - TD Control\n \n Params\n ======\n num_episodes (int): number of episodes to run the algorithm\n alpha (float): step-size parameters for the update step\n gamma (float): discount factor\n plot_every (int): number of episodes to use when calculating average score\n \"\"\"\n nA = env.action_space.n # number of actions\n Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays\n \n # monitor performance\n tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores\n avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes\n \n for i_episode in range(1, num_episodes+1):\n # monitor progress\n if i_episode % 100 == 0:\n print(\"\\rEpisode {}/{}\".format(i_episode, num_episodes), end=\"\")\n sys.stdout.flush()\n \n score = 0 # initialize score\n state = env.reset() # start episode\n eps = 0.005 # set value of epsilon\n \n while True:\n action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection\n next_state, reward, done, info = env.step(action) # take action A, observe R, S'\n score += reward # add reward to agent's score\n # update Q\n Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \\\n state, action, reward, next_state) \n state = next_state # S <- S'\n if done:\n tmp_scores.append(score) # append score\n break\n if (i_episode % plot_every == 0):\n avg_scores.append(np.mean(tmp_scores))\n \n # plot performance\n plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))\n plt.xlabel('Episode Number')\n plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)\n plt.show()\n # print best 100-episode performance\n print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))\n return Q",
"_____no_output_____"
]
],
[
[
"Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. \n\nIf the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.",
"_____no_output_____"
]
],
[
[
"# obtain the estimated optimal policy and corresponding action-value function\nQ_expsarsa = expected_sarsa(env, 5000, 1)\n\n# print the estimated optimal policy\npolicy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12)\ncheck_test.run_check('td_control_check', policy_expsarsa)\nprint(\"\\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):\")\nprint(policy_expsarsa)\n\n# plot the estimated optimal state-value function\nplot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)])",
"Episode 5000/5000"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb596daa8d673e37f30b69f7bd90a7a765669058 | 123,113 | ipynb | Jupyter Notebook | docs/tutorials/03_Driver_Agent.ipynb | ABedychaj/MaaSSim | d11fca429f509c2fa597a623e1db5fb608aec02f | [
"MIT"
] | 11 | 2020-09-23T12:29:36.000Z | 2022-02-17T23:10:25.000Z | docs/tutorials/03_Driver_Agent.ipynb | ABedychaj/MaaSSim | d11fca429f509c2fa597a623e1db5fb608aec02f | [
"MIT"
] | 1 | 2020-12-11T10:00:37.000Z | 2020-12-11T10:00:37.000Z | docs/tutorials/03_Driver_Agent.ipynb | ABedychaj/MaaSSim | d11fca429f509c2fa597a623e1db5fb608aec02f | [
"MIT"
] | 7 | 2020-10-13T17:14:34.000Z | 2022-02-17T15:27:21.000Z | 191.466563 | 103,796 | 0.882831 | [
[
[
">\n> # MaaS Sim tutorial\n>\n> ## Driver Agent\n>\n-----\n\nDriver in the MaaSSim is a `process` of `simpy.Environment`. \n\nIt is executed as a sequence of steps `driverEvent`. \n\nIt operates in an infinite loop, waiting for requests and serving them.\n\nMain routine is `VehicleAgent.loop_day` \n\nDrivers are instantiated from `vechiles` of `inData` while creating MaaSSim simulator.\n\nDefined as a class in `driver.py`",
"_____no_output_____"
]
],
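[
[
"Before diving into the MaaSSim objects, the next cell is a minimal, self-contained SimPy sketch of the same idea: a driver modelled as a `simpy` process that waits for requests in an infinite loop and serves them. It is an illustration only (it uses a `simpy.Store` and toy names such as `toy_driver`), not MaaSSim's actual implementation, which is inspected further below via `VehicleAgent`.",
"_____no_output_____"
],
[
"# Illustration only: a toy driver process in plain SimPy (NOT MaaSSim's code).\nimport simpy\n\ndef toy_driver(env, requests):\n    # infinite loop: wait for a request, then serve it\n    while True:\n        req = yield requests.get()   # blocks until a trip is requested\n        print(f'{env.now}: driver starts serving request {req}')\n        yield env.timeout(5)         # driving takes 5 time units\n        print(f'{env.now}: request {req} completed')\n\ndef toy_requester(env, requests):\n    # issue two requests, 3 time units apart\n    for i in range(2):\n        yield env.timeout(3)\n        yield requests.put(i)\n\ntoy_env = simpy.Environment()\ntoy_requests = simpy.Store(toy_env)\ntoy_env.process(toy_driver(toy_env, toy_requests))\ntoy_env.process(toy_requester(toy_env, toy_requests))\ntoy_env.run(until=20)",
"_____no_output_____"
]
],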
[
[
"import os, sys # add MaaSSim to path (not needed if MaaSSim is already in path)\nmodule_path = os.path.abspath(os.path.join('../..'))\nif module_path not in sys.path:\n sys.path.append(module_path)",
"_____no_output_____"
],
[
"import MaaSSim\nimport MaaSSim.utils\nimport logging\nfrom MaaSSim.simulators import simulate\nfrom MaaSSim.driver import VehicleAgent, driverEvent\nfrom MaaSSim.data_structures import structures as inData",
"_____no_output_____"
],
[
"params = MaaSSim.utils.get_config('../../data/config/default.json')",
"_____no_output_____"
],
[
"inData = MaaSSim.utils.load_G(inData, params, stats = True) #\ninData = MaaSSim.utils.prep_supply_and_demand(inData, params) # generate supply and demand\nsim = simulate(inData, params = params, _print = False, logger_level = logging.WARNING) ",
"13-10-20 09:59:41-WARNING-Setting up 1h simulation at 2020-10-13 09:32:15 for 5 vehicles and 20 passengers in Nootdorp, Netherlands\n13-10-20 09:59:42-WARNING-simulation time 0.9 s\n13-10-20 09:59:42-WARNING-assertion tests for simulation results - passed\n"
]
],
[
[
"## VehicleAgent",
"_____no_output_____"
]
],
[
[
"self = sim.vehs[3]",
"_____no_output_____"
],
[
"from MaaSSim.visualizations import plot_veh_sim\nplot_veh_sim(sim, self.id)",
"/Users/rkucharski/Documents/GitHub/MaaSSim/MaaSSim/visualizations.py:113: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n t['node'] = t.pos\n"
],
[
"self.id",
"_____no_output_____"
],
[
"self.veh",
"_____no_output_____"
]
],
[
[
"----\n#### events ",
"_____no_output_____"
]
],
[
[
"self.requested = self.sim.env.event() # triggers when vehicle is requested \nself.arrived_at_pick_up = self.sim.env.event() # triggers when vehicle is arrived at pick up\nself.arrived = self.sim.env.event() # triggers when vehicle is arrived at destination",
"_____no_output_____"
]
],
[
[
"----\n#### methods\nfunctions from kwargs at the sim level",
"_____no_output_____"
]
],
[
[
"self.f_driver_learn = self.sim.functions.f_driver_learn # exit from the system due to prev exp\nself.f_driver_out = self.sim.functions.f_driver_out # exit from the system due to prev exp\nself.f_driver_decline = self.sim.functions.f_driver_decline # reject the incoming request\nself.f_driver_repos = self.sim.functions.f_driver_repos # reposition after you are free again",
"_____no_output_____"
]
],
[
[
"---------\n#### passStatus sequence ",
"_____no_output_____"
]
],
[
[
"import pandas as pd\npd.DataFrame([[s,s.name,s.value] for s in driverEvent], \n columns = ['status','name','value'])",
"_____no_output_____"
]
],
[
[
"---------\n#### report",
"_____no_output_____"
]
],
[
[
"self.veh.status = driverEvent.STARTS_DAY\npd.DataFrame(self.myrides)",
"_____no_output_____"
]
],
[
[
"---\n(c) Rafał Kucharski, Delft, 2020",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb5977bac4003f86f141593719b1dcbd9a2b09db | 159,751 | ipynb | Jupyter Notebook | src/Regression.ipynb | pvchaumier/ml_by_example | 81a6ff8107a2a827fe66af53320503bba77959c8 | [
"0BSD"
] | 1 | 2016-03-22T18:49:54.000Z | 2016-03-22T18:49:54.000Z | src/Regression.ipynb | pvchaumier/ml_by_example | 81a6ff8107a2a827fe66af53320503bba77959c8 | [
"0BSD"
] | null | null | null | src/Regression.ipynb | pvchaumier/ml_by_example | 81a6ff8107a2a827fe66af53320503bba77959c8 | [
"0BSD"
] | null | null | null | 417.104439 | 61,306 | 0.923994 | [
[
[
"# Ridge Regression\n\n## Goal\n\nGiven a dataset with continuous inputs and corresponding outputs, the objective is to find a function that matches the two as accurately as possible. This function is usually called the target function.\n\nIn the case of a ridge regression, the idea is to modellize the target function as a linear sum of functions (that can be non linear and are generally not). Thus, with f the target function, $\\phi_i$ a base function and $w_i$ its weight in the linear sum, we suppose that:\n$$f(x) = \\sum w_i \\phi_i(x)$$\n\nThe parameters that must be found are the weights $w_i$ for each base function $\\phi_i$. This is done by minimizing the [root mean square error](https://en.wikipedia.org/wiki/Root-mean-square_deviation).\n\nThere is a closed solution to this problem given by the following equation $W = (\\Phi^T \\Phi)^{-1} \\Phi^T Y$ with:\n- $d$ the number of base functions\n- $W = (w_0, ..., w_d)$ the weight vector\n- $Y$ the output vector\n- $\\Phi(X) = (\\phi_0(X)^T, \\phi_1(X)^T, ..., \\phi_d(X)^T)$, $\\phi_0(X) = \\mathbf{1}$ and $\\phi_i(X) = (\\phi_i(X_1), ... \\phi_i(X_n))$.\n\nIf you want more details, I find that the best explanation is the one given in the book [Pattern Recognition and Machine Learning](http://research.microsoft.com/en-us/um/people/cmbishop/PRML/) by C. Bishop.\n\n## Implementation\n\nThe following implementation does exactly what is explained above and uses three different types of kernel: \n- linear $f(x) = w_0 + w_1 x$\n- polynomial $f(x) = \\sum_{i=0}^d w_i x^i$ with d the degree of the polynome. Notice that d = 1 is the linear case.\n- gaussian $f(x) = \\sum w_i \\exp(-\\frac{x - b_i}{2 \\sigma^2})$ with $b_i$ define the location of the base function number $i$ (they are usually taken at random within the dataset) and $\\sigma$ a parameter tuning the width of the functions. Here the \"width\" is the same for all base function but you could make them different for each of them.\n\nThe steps are:\n- normalization\n- building the $\\Phi$ matrix\n- computing the weights $W$\n- plotting the found function and the dataset",
"_____no_output_____"
]
],
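[
[
"As a side note: the closed form above is the ordinary least-squares solution, and that is what the cells below implement. Adding the ridge penalty $\\lambda \\|W\\|^2$ only changes the closed form to $W = (\\Phi^T \\Phi + \\lambda I)^{-1} \\Phi^T Y$. The next cell is a minimal sketch of that regularized closed form; the function name `ridge_weights` and the default value of `lam` are illustrative, and `lam = 0` recovers the solution used in the rest of this notebook.",
"_____no_output_____"
],
[
"# Minimal sketch of the ridge closed form W = (Phi^T Phi + lam * I)^(-1) Phi^T Y.\n# lam = 0 recovers the ordinary least-squares solution used below.\nimport numpy as np\nfrom numpy.linalg import inv\n\ndef ridge_weights(phi_X, Y, lam=0.1):\n    n_features = phi_X.shape[1]\n    regularized_gram = np.dot(phi_X.T, phi_X) + lam * np.eye(n_features)\n    return np.dot(np.dot(inv(regularized_gram), phi_X.T), Y)",
"_____no_output_____"
]
],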
[
[
"# to display plots within the notebook\n%matplotlib inline\n# to define the size of the plotted images\nfrom pylab import rcParams\nrcParams['figure.figsize'] = (15, 10)\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nfrom numpy.linalg import inv\n\nfrom fct import normalize_pd",
"_____no_output_____"
]
],
[
[
"The X matrix correspond to the inputs and the Y matrix to the outputs to predict.",
"_____no_output_____"
]
],
[
[
"data = pd.read_csv('datasets/data_regression.csv')\nX = data['X']\nY = data['Y']\n\n# Normalization\nX = np.asmatrix(normalize_pd(X)).T\nY = np.asmatrix(normalize_pd(Y)).T",
"_____no_output_____"
]
],
[
[
"## Linear regression\n\nHere we have $\\Phi(X) = X$. The function we look for has the form $f(x) = ax + b$.",
"_____no_output_____"
]
],
[
[
"def linear_regression(X, Y):\n # Building the Phi matrix\n Ones = np.ones((X.shape[0], 1))\n phi_X = np.hstack((Ones, X))\n\n # Calculating the weights\n w = np.dot(np.dot(inv(np.dot(phi_X.T, phi_X)), phi_X.T), Y)\n \n # Predicting the output values\n Y_linear_reg = np.dot(phi_X, w)\n\n return Y_linear_reg",
"_____no_output_____"
],
[
"Y_linear_reg = linear_regression(X, Y)\n\nplt.plot(X, Y, '.')\nplt.plot(X, Y_linear_reg, 'r')\nplt.title('Linear Regression')\nplt.legend(['Data', 'Linear Regression'])",
"_____no_output_____"
]
],
[
[
"The obtained solution does not represent the data very well. It is because the power of representation is too low compared to the target function. This is usually referred to as **underfitting**.",
"_____no_output_____"
],
[
"## Polynomial Regression\n\nNow, we approximate the target function by a polynom $f(x) = w_0 + w_1 x + w_2 x^2 + ... + w_d x^d$ with $d$ the degree of the polynom.\n\nWe plotted the results obtained with different degrees.",
"_____no_output_____"
]
],
[
[
"def polynomial_regression(X, Y, degree):\n # Building the Phi matrix\n Ones = np.ones((X.shape[0], 1))\n # Add a column of ones\n phi_X = np.hstack((Ones, X))\n\n # add a column of X elevated to all the powers from 2 to degree\n for i in range(2, degree + 1):\n # calculate the vector X to the power i and add it to the Phi matrix\n X_power = np.array(X) ** i\n phi_X = np.hstack((phi_X, np.asmatrix(X_power)))\n\n # Calculating the weights\n w = np.dot(np.dot(inv(np.dot(phi_X.T, phi_X)), phi_X.T), Y)\n \n # Predicting the output values\n Y_poly_reg = np.dot(phi_X, w)\n\n return Y_poly_reg",
"_____no_output_____"
],
[
"# Degrees to plot you can change these values to\n# see how the degree of the polynom affects the \n# predicted function\ndegrees = [1, 2, 20]\nlegend = ['Data']\n\nplt.plot(X, Y, '.')\nfor degree in degrees:\n Y_poly_reg = polynomial_regression(X, Y, degree)\n plt.plot(X, Y_poly_reg)\n legend.append('degree ' + str(degree))\n \nplt.legend(legend)\nplt.title('Polynomial regression results depending on the degree of the polynome used')",
"_____no_output_____"
]
],
[
[
"The linear case is still underfitting but now, we see that the polynom of degree 20 is too sensitive to the data, especially around $[-2.5, -1.5]$. This phenomena is called **overfitting**: the model starts fitting the noise in the data as well and looses its capacity to generalize.",
"_____no_output_____"
],
[
"## Regression with kernel gaussian\n\nLastly, we look at function of the type $f(x) = \\sum \\phi_i(x)$ with $\\phi_i(x) = \\exp({-\\frac{x - b_i}{\\sigma^2}}$). $b_i$ is called the base and $\\sigma$ is its width.\n\nUsually, the $b_i$ are taken randomly within the dataset. That is what I did in the implementation with b the number of bases.\n\nIn the plot, there is the base function used to compute the regressed function and the latter.",
"_____no_output_____"
]
],
[
[
"def gaussian_regression(X, Y, b, sigma, return_base=True):\n \"\"\"b is the number of bases to use, sigma is the variance of the\n base functions.\"\"\"\n \n # Building the Phi matrix\n Ones = np.ones((X.shape[0], 1))\n # Add a column of ones\n phi_X = np.hstack((Ones, X))\n \n # Choose randomly without replacement b values from X\n # to be the center of the base functions\n X_array = np.array(X).reshape(1, -1)[0]\n bases = np.random.choice(X_array, b, replace=False)\n \n bases_function = []\n for i in range(1, b):\n base_function = np.exp(-0.5 * (((X_array - bases[i - 1] * \n np.ones(len(X_array))) / sigma) ** 2))\n bases_function.append(base_function)\n phi_X = np.hstack((phi_X, np.asmatrix(base_function).T))\n\n w = np.dot(np.dot(inv(np.dot(phi_X.T, phi_X)), phi_X.T), Y)\n\n if return_base:\n return np.dot(phi_X, w), bases_function\n else:\n return np.dot(phi_X, w)",
"_____no_output_____"
],
[
"# By changing this value, you will change the width of the base functions\nsigma = 0.2\n# b is the number of base functions used\nb = 5\nY_gauss_reg, bases_function = gaussian_regression(X, Y, b, sigma)\n\n# Plotting the base functions and the dataset\nplt.plot(X, Y, '.')\nplt.plot(X, Y_gauss_reg)\n\nlegend = ['Data', 'Regression result']\nfor i, base_function in enumerate(bases_function):\n plt.plot(X, base_function)\n legend.append('Base function n°' + str(i))\n\nplt.legend(legend)\nplt.title('Regression with gaussian base functions')",
"_____no_output_____"
]
],
[
[
"We can observe that here the sigma is too small. Some part of the dataset are too far away from the bases to be taken into accoutn.\nIf you change the <code>sigma</code> in the code to 0.5 and then 1. You will notice how the output function will get closer to the data.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
cb59832bcc2523d6acc2090216415ee817cf2609 | 29,965 | ipynb | Jupyter Notebook | Kaggle 30 Days Challange/30-days-of-kaggle-challenge.ipynb | kowshir-bitto/Kaggle-Competition | 6331b970a559c017fe9ece6557d17de220ab4a17 | [
"MIT"
] | 3 | 2021-09-02T18:15:11.000Z | 2021-09-03T20:10:42.000Z | Kaggle 30 Days Challange/30-days-of-kaggle-challenge.ipynb | kowshir-bitto/Kaggle-Competition | 6331b970a559c017fe9ece6557d17de220ab4a17 | [
"MIT"
] | null | null | null | Kaggle 30 Days Challange/30-days-of-kaggle-challenge.ipynb | kowshir-bitto/Kaggle-Competition | 6331b970a559c017fe9ece6557d17de220ab4a17 | [
"MIT"
] | null | null | null | 29,965 | 29,965 | 0.633472 | [
[
[
"import numpy as np \nimport pandas as pd \nimport os\nfor dirname, _, filenames in os.walk('/kaggle/input'):\n for filename in filenames:\n print(os.path.join(dirname, filename))",
"/kaggle/input/30-days-of-ml/sample_submission.csv\n/kaggle/input/30-days-of-ml/train.csv\n/kaggle/input/30-days-of-ml/test.csv\n"
],
[
"train = pd.read_csv(\"/kaggle/input/30-days-of-ml/train.csv\")\ntest = pd.read_csv(\"/kaggle/input/30-days-of-ml/test.csv\")\nsample_submission = pd.read_csv(\"/kaggle/input/30-days-of-ml/sample_submission.csv\")",
"_____no_output_____"
],
[
"from pandas.plotting._misc import scatter_matrix\nfrom sklearn.model_selection import train_test_split \nfrom sklearn import metrics\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.preprocessing import OrdinalEncoder\nfrom sklearn.neighbors import KNeighborsRegressor\nfrom mlxtend.regressor import StackingRegressor\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.tree import DecisionTreeRegressor\nfrom xgboost import XGBRegressor\nfrom lightgbm import LGBMRegressor\n%matplotlib inline\n",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
],
[
"train.isnull().sum()",
"_____no_output_____"
],
[
"s = (train.dtypes == 'object')\nobject_cols = list(s[s].index)",
"_____no_output_____"
],
[
"ordinal_encoder = OrdinalEncoder()\ntrain[object_cols] = ordinal_encoder.fit_transform(train[object_cols])",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
],
[
"X_Data= train.drop(['target'],axis=1)\nY_Data= train['target']",
"_____no_output_____"
],
[
"x_train,x_test,y_train,y_test = train_test_split(X_Data,Y_Data,test_size=.2)",
"_____no_output_____"
],
[
"knn = KNeighborsRegressor(n_neighbors=5)",
"_____no_output_____"
],
[
"knn.fit(x_train,y_train)\npredicted=knn.predict(x_test)",
"_____no_output_____"
],
[
"print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, predicted)))",
"Root Mean Squared Error: 0.8182643587738403\n"
],
[
"tree_clf = DecisionTreeRegressor(max_depth=2,random_state=42)",
"_____no_output_____"
],
[
"tree_clf.fit(X_Data,Y_Data)",
"_____no_output_____"
],
[
"tree_clf.score(X_Data,Y_Data)",
"_____no_output_____"
],
[
"prediction = tree_clf.predict(x_test)\nprint('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, prediction)))",
"Root Mean Squared Error: 0.7436803125240358\n"
],
[
"rnd = RandomForestRegressor(max_depth=10)\nrnd.fit(x_train,y_train)\nrnd.score(x_test,y_test)\nprediction = rnd.predict(x_test)\nprint('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, prediction)))",
"Root Mean Squared Error: 0.735282199252681\n"
],
[
"dtc=DecisionTreeRegressor()\nknnc=KNeighborsRegressor()\nrfc=RandomForestRegressor()\nstregr = StackingRegressor(regressors=[dtc,knnc,rfc], \n meta_regressor=knnc)",
"_____no_output_____"
],
[
"stregr.fit(x_train,y_train)\nstregr.score(x_test,y_test)\nprediction = stregr.predict(x_test)\nprint('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, prediction)))",
"Root Mean Squared Error: 0.9756265357065537\n"
],
[
"from sklearn import model_selection\ntrain = pd.read_csv(\"../input/30-days-of-ml/train.csv\")\ntest = pd.read_csv(\"../input/30-days-of-ml/test.csv\")\nprint(train.shape,test.shape)\ntrain['kfold']=-1\nkfold = model_selection.KFold(n_splits=10, shuffle= True, random_state = 42)\nfor fold, (train_indicies, valid_indicies) in enumerate(kfold.split(X=train)):\n train.loc[valid_indicies,'kfold'] = fold\nprint(train.kfold.value_counts())\ntrain.to_csv(\"trainfold_10.csv\",index=False)",
"(300000, 26) (200000, 25)\n0 30000\n1 30000\n2 30000\n3 30000\n4 30000\n5 30000\n6 30000\n7 30000\n8 30000\n9 30000\nName: kfold, dtype: int64\n"
],
[
"train = pd.read_csv(\"./trainfold_10.csv\")\ntest = pd.read_csv(\"../input/30-days-of-ml/test.csv\")\nsample_submission = pd.read_csv(\"../input/30-days-of-ml/sample_submission.csv\")\nprint(train.shape,test.shape)\ntrain.sample()",
"(300000, 27) (200000, 25)\n"
],
[
"from sklearn import preprocessing\nfinal_predictions = []\nscore= []\nuseful_features = [c for c in train.columns if c not in (\"id\",\"target\",\"kfold\")]\nobject_cols = [col for col in useful_features if 'cat' in col]\nnumerical_cols = [col for col in useful_features if 'cont' in col]\ntest = test[useful_features]\nfor fold in range(10):\n xtrain = train[train.kfold != fold].reset_index(drop=True)\n xvalid = train[train.kfold == fold].reset_index(drop=True)\n xtest = test.copy()\n ytrain = xtrain.target\n yvalid = xvalid.target\n xtrain = xtrain[useful_features]\n xvalid = xvalid[useful_features]\n ordinal_encoder = OrdinalEncoder()\n xtrain[object_cols] = ordinal_encoder.fit_transform(xtrain[object_cols])\n xvalid[object_cols] = ordinal_encoder.transform(xvalid[object_cols])\n xtest[object_cols] = ordinal_encoder.transform(xtest[object_cols])\n scaler = preprocessing.StandardScaler()\n xtrain[numerical_cols] = scaler.fit_transform(xtrain[numerical_cols])\n xvalid[numerical_cols] = scaler.transform(xvalid[numerical_cols])\n xtest[numerical_cols] = scaler.transform(xtest[numerical_cols])\n xgb_params = {\n 'learning_rate': 0.03628302216953097,\n 'subsample': 0.7875490025178,\n 'colsample_bytree': 0.11807135201147,\n 'max_depth': 3,\n 'booster': 'gbtree', \n 'reg_lambda': 0.0008746338866473539,\n 'reg_alpha': 23.13181079976304,\n 'random_state':40,\n 'n_estimators':10000\n }\n model= XGBRegressor()\n model.fit(xtrain,ytrain,early_stopping_rounds=300,eval_set=[(xvalid,yvalid)],verbose=2000)\n preds_valid = model.predict(xvalid)\n test_pre = model.predict(xtest)\n final_predictions.append(test_pre)\n rms = mean_squared_error(yvalid,preds_valid,squared=False)\n score.append(rms)\n print(f\"fold:{fold},rmse:{rms}\")\nprint(np.mean(score),np.std(score))",
"[0]\tvalidation_0-rmse:5.48142\n[99]\tvalidation_0-rmse:0.72311\nfold:0,rmse:0.7230218995635727\n[0]\tvalidation_0-rmse:5.46264\n[99]\tvalidation_0-rmse:0.72366\nfold:1,rmse:0.723384142820143\n[0]\tvalidation_0-rmse:5.46902\n[99]\tvalidation_0-rmse:0.72168\nfold:2,rmse:0.7215827127377072\n[0]\tvalidation_0-rmse:5.46890\n[99]\tvalidation_0-rmse:0.72489\nfold:3,rmse:0.7248021160004191\n[0]\tvalidation_0-rmse:5.46339\n[99]\tvalidation_0-rmse:0.72902\nfold:4,rmse:0.7287440413925509\n[0]\tvalidation_0-rmse:5.46997\n[99]\tvalidation_0-rmse:0.72027\nfold:5,rmse:0.7200044728690408\n[0]\tvalidation_0-rmse:5.47118\n[99]\tvalidation_0-rmse:0.72426\nfold:6,rmse:0.7241270093292812\n[0]\tvalidation_0-rmse:5.46705\n[99]\tvalidation_0-rmse:0.72446\nfold:7,rmse:0.7244227862849206\n[0]\tvalidation_0-rmse:5.47235\n[99]\tvalidation_0-rmse:0.72748\nfold:8,rmse:0.72747197385565\n[0]\tvalidation_0-rmse:5.47680\n[99]\tvalidation_0-rmse:0.71938\nfold:9,rmse:0.7193178229680087\n0.7236878977821294 0.002819414477979906\n"
],
[
"preds = np.mean(np.column_stack(final_predictions),axis=1)\nprint(preds)\nsample_submission.target = preds\nsample_submission.to_csv(\"submission.csv\",index=False)\nprint(\"success\")",
"[7.9965744 8.318919 8.396938 ... 8.386236 8.091183 7.9360657]\nsuccess\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5997dd2df485df582643aef39f09d6c32997a7 | 403,356 | ipynb | Jupyter Notebook | notebooks/17/2/Training_and_Testing.ipynb | choldgraf/textbook-jekyll | 1259b1df346f2091db53ca09c46be7d320d823b2 | [
"MIT"
] | null | null | null | notebooks/17/2/Training_and_Testing.ipynb | choldgraf/textbook-jekyll | 1259b1df346f2091db53ca09c46be7d320d823b2 | [
"MIT"
] | null | null | null | notebooks/17/2/Training_and_Testing.ipynb | choldgraf/textbook-jekyll | 1259b1df346f2091db53ca09c46be7d320d823b2 | [
"MIT"
] | null | null | null | 1,003.373134 | 170,484 | 0.955573 | [
[
[
"# HIDDEN\nimport matplotlib\n#matplotlib.use('Agg')\npath_data = '../../../data/'\nfrom datascience import *\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nimport numpy as np\nimport math\nimport scipy.stats as stats\nplt.style.use('fivethirtyeight')",
"_____no_output_____"
],
[
"# HIDDEN\n\ndef standard_units(x):\n return (x - np.mean(x))/np.std(x)",
"_____no_output_____"
],
[
"# HIDDEN\n\n# HIDDEN\n\ndef distance(pt1, pt2):\n return np.sqrt(np.sum((pt1 - pt2)**2))\n\ndef all_dists(training, p):\n attributes = training.drop('Class')\n def dist_point_row(row):\n return distance(np.array(row), p)\n return attributes.apply(dist_point_row)\n\ndef table_with_distances(training, p):\n return training.with_column('Distance', all_dists(training, p))\n\ndef closest(training, p, k):\n with_dists = table_with_distances(training, p)\n sorted_by_dist = with_dists.sort('Distance')\n topk = sorted_by_dist.take(np.arange(k))\n return topk\n\ndef majority(topkclasses):\n ones = topkclasses.where('Class', are.equal_to(1)).num_rows\n zeros = topkclasses.where('Class', are.equal_to(0)).num_rows\n if ones > zeros:\n return 1\n else:\n return 0\n\ndef classify(training, p, k):\n closestk = closest(training, p, k)\n topkclasses = closestk.select('Class')\n return majority(topkclasses)",
"_____no_output_____"
],
[
"# HIDDEN\n\ndef classify_grid(training, test, k):\n c = make_array()\n for i in range(test.num_rows):\n # Run the classifier on the ith patient in the test set\n c = np.append(c, classify(training, make_array(test.row(i)), k)) \n return c",
"_____no_output_____"
],
[
"# HIDDEN \nckd = Table.read_table(path_data + 'ckd.csv').relabeled('Blood Glucose Random', 'Glucose')\nckd = Table().with_columns(\n 'Hemoglobin', standard_units(ckd.column('Hemoglobin')),\n 'Glucose', standard_units(ckd.column('Glucose')),\n 'White Blood Cell Count', standard_units(ckd.column('White Blood Cell Count')),\n 'Class', ckd.column('Class')\n)\ncolor_table = Table().with_columns(\n 'Class', make_array(1, 0),\n 'Color', make_array('darkblue', 'gold')\n)\nckd = ckd.join('Class', color_table)",
"_____no_output_____"
]
],
[
[
"### Training and Testing ###\nHow good is our nearest neighbor classifier? To answer this we'll need to find out how frequently our classifications are correct. If a patient has chronic kidney disease, how likely is our classifier to pick that up?\n\nIf the patient is in our training set, we can find out immediately. We already know what class the patient is in. So we can just compare our prediction and the patient's true class.\n\nBut the point of the classifier is to make predictions for *new* patients not in our training set. We don't know what class these patients are in but we can make a prediction based on our classifier. How to find out whether the prediction is correct?\n\nOne way is to wait for further medical tests on the patient and then check whether or not our prediction agrees with the test results. With that approach, by the time we can say how likely our prediction is to be accurate, it is no longer useful for helping the patient.\n\nInstead, we will try our classifier on some patients whose true classes are known. Then, we will compute the proportion of the time our classifier was correct. This proportion will serve as an estimate of the proportion of all new patients whose class our classifier will accurately predict. This is called *testing*.",
"_____no_output_____"
],
[
"### Overly Optimistic \"Testing\" ###\nThe training set offers a very tempting set of patients on whom to test out our classifier, because we know the class of each patient in the training set.\n\nBut let's be careful ... there will be pitfalls ahead if we take this path. An example will show us why.\n\nSuppose we use a 1-nearest neighbor classifier to predict whether a patient has chronic kidney disease, based on glucose and white blood cell count.",
"_____no_output_____"
]
],
[
[
"ckd.scatter('White Blood Cell Count', 'Glucose', colors='Color')",
"_____no_output_____"
]
],
[
[
"Earlier, we said that we expect to get some classifications wrong, because there's some intermingling of blue and gold points in the lower-left.\n\nBut what about the points in the training set, that is, the points already on the scatter? Will we ever mis-classify them?\n\nThe answer is no. Remember that 1-nearest neighbor classification looks for the point *in the training set* that is nearest to the point being classified. Well, if the point being classified is already in the training set, then its nearest neighbor in the training set is itself! And therefore it will be classified as its own color, which will be correct because each point in the training set is already correctly colored.\n\nIn other words, **if we use our training set to \"test\" our 1-nearest neighbor classifier, the classifier will pass the test 100% of the time.**\n\nMission accomplished. What a great classifier! \n\nNo, not so much. A new point in the lower-left might easily be mis-classified, as we noted earlier. \"100% accuracy\" was a nice dream while it lasted.\n\nThe lesson of this example is *not* to use the training set to test a classifier that is based on it.",
"_____no_output_____"
],
[
"### Generating a Test Set ###\nIn earlier chapters, we saw that random sampling could be used to estimate the proportion of individuals in a population that met some criterion. Unfortunately, we have just seen that the training set is not like a random sample from the population of all patients, in one important respect: Our classifier guesses correctly for a higher proportion of individuals in the training set than it does for individuals in the population.\n\nWhen we computed confidence intervals for numerical parameters, we wanted to have many new random samples from a population, but we only had access to a single sample. We solved that problem by taking bootstrap resamples from our sample.\n\nWe will use an analogous idea to test our classifier. We will *create two samples out of the original training set*, use one of the samples as our training set, and *the other one for testing*. \n\nSo we will have three groups of individuals:\n- a training set on which we can do any amount of exploration to build our classifier;\n- a separate testing set on which to try out our classifier and see what fraction of times it classifies correctly;\n- the underlying population of individuals for whom we don't know the true classes; the hope is that our classifier will succeed about as well for these individuals as it did for our testing set.",
"_____no_output_____"
],
[
"How to generate the training and testing sets? You've guessed it – we'll select at random.\n\nThere are 158 individuals in `ckd`. Let's use a random half of them for training and the other half for testing. To do this, we'll shuffle all the rows, take the first 79 as the training set, and the remaining 79 for testing.",
"_____no_output_____"
]
],
[
[
"shuffled_ckd = ckd.sample(with_replacement=False)\ntraining = shuffled_ckd.take(np.arange(79))\ntesting = shuffled_ckd.take(np.arange(79, 158))",
"_____no_output_____"
]
],
[
[
"Now let's construct our classifier based on the points in the training sample:",
"_____no_output_____"
]
],
[
[
"training.scatter('White Blood Cell Count', 'Glucose', colors='Color')\nplt.xlim(-2, 6)\nplt.ylim(-2, 6);",
"_____no_output_____"
]
],
[
[
"We get the following classification regions and decision boundary:",
"_____no_output_____"
]
],
[
[
"# HIDDEN\n\nx_array = make_array()\ny_array = make_array()\nfor x in np.arange(-2, 6.1, 0.25):\n for y in np.arange(-2, 6.1, 0.25):\n x_array = np.append(x_array, x)\n y_array = np.append(y_array, y)\n \ntest_grid = Table().with_columns(\n 'Glucose', x_array,\n 'White Blood Cell Count', y_array\n)",
"_____no_output_____"
],
[
"# HIDDEN\n\nc = classify_grid(training.drop('Hemoglobin', 'Color'), test_grid, 1)",
"_____no_output_____"
],
[
"# HIDDEN\n\ntest_grid = test_grid.with_column('Class', c).join('Class', color_table)\ntest_grid.scatter('White Blood Cell Count', 'Glucose', colors='Color', alpha=0.4, s=30)\n\nplt.xlim(-2, 6)\nplt.ylim(-2, 6);",
"_____no_output_____"
]
],
[
[
"Place the *test* data on this graph and you can see at once that while the classifier got almost all the points right, there are some mistakes. For example, some blue points of the test set fall in the gold region of the classifier.",
"_____no_output_____"
]
],
[
[
"# HIDDEN\n\ntest_grid = test_grid.with_column('Class', c).join('Class', color_table)\ntest_grid.scatter('White Blood Cell Count', 'Glucose', colors='Color', alpha=0.4, s=30)\n\nplt.scatter(testing.column('White Blood Cell Count'), testing.column('Glucose'), c=testing.column('Color'), edgecolor='k')\n\nplt.xlim(-2, 6)\nplt.ylim(-2, 6);",
"_____no_output_____"
]
],
[
[
"Some errors notwithstanding, it looks like the classifier does fairly well on the test set. Assuming that the original sample was drawn randomly from the underlying population, the hope is that the classifier will perform with similar accuracy on the overall population, since the test set was chosen randomly from the original sample.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb599854cc307d7cbe90bb6348ea9dcbbcd62ae4 | 38,496 | ipynb | Jupyter Notebook | 34_activation_func.ipynb | gongsh93/tensorflow-example | 08ed4234764461dcd1cd5a5df1e917d6c7c7cfc1 | [
"MIT"
] | null | null | null | 34_activation_func.ipynb | gongsh93/tensorflow-example | 08ed4234764461dcd1cd5a5df1e917d6c7c7cfc1 | [
"MIT"
] | null | null | null | 34_activation_func.ipynb | gongsh93/tensorflow-example | 08ed4234764461dcd1cd5a5df1e917d6c7c7cfc1 | [
"MIT"
] | null | null | null | 221.241379 | 12,788 | 0.929889 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport math",
"_____no_output_____"
],
[
"# sigmoid\ndef sigmoid(x):\n return 1 / (1 + np.exp(-x))",
"_____no_output_____"
],
[
"# hyperbolic tangent\ndef tanh(x):\n return list(map(lambda x : math.tanh(x), x))",
"_____no_output_____"
],
[
"# relu\ndef relu(x):\n result = []\n for ele in x:\n if(ele <= 0):\n result.append(0)\n else:\n result.append(ele)\n \n return result",
"_____no_output_____"
],
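[
"# Optional, vectorized alternatives (a sketch): np.tanh and np.maximum produce\n# the same curves as the list-based tanh/relu above, but return numpy arrays.\nimport numpy as np\n\ndef tanh_vec(x):\n    return np.tanh(x)\n\ndef relu_vec(x):\n    return np.maximum(0, x)",
"_____no_output_____"
],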
[
"x = np.linspace(-4, 4, 100)",
"_____no_output_____"
],
[
"sig = sigmoid(x)\nplt.plot(x, sig)\nplt.show()",
"_____no_output_____"
],
[
"th = tanh(x)\nplt.plot(x, th)\nplt.show()",
"_____no_output_____"
],
[
"rlu = relu(x)\nplt.plot(x, rlu)\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb59a13c285fc6432091f5976e5a1223f899d51c | 59,820 | ipynb | Jupyter Notebook | Group_Lasso_Notebooks/example_sparse_group_lasso.ipynb | laporpe/STA_208 | 2baefbaad0f18658ade9c1a5498630b6f917a676 | [
"MIT"
] | 1 | 2021-04-27T18:59:13.000Z | 2021-04-27T18:59:13.000Z | Group_Lasso_Notebooks/example_sparse_group_lasso.ipynb | laporpe/STA_208 | 2baefbaad0f18658ade9c1a5498630b6f917a676 | [
"MIT"
] | null | null | null | Group_Lasso_Notebooks/example_sparse_group_lasso.ipynb | laporpe/STA_208 | 2baefbaad0f18658ade9c1a5498630b6f917a676 | [
"MIT"
] | 2 | 2021-04-27T19:03:18.000Z | 2022-01-11T01:50:57.000Z | 159.095745 | 17,900 | 0.8888 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\nGroupLasso for linear regression with dummy variables\n=====================================================\n\nA sample script for group lasso with dummy variables\n",
"_____no_output_____"
],
[
"Setup\n-----\n\n",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.linear_model import Ridge\nfrom sklearn.metrics import r2_score\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import OneHotEncoder\n\nfrom group_lasso import GroupLasso\nfrom group_lasso.utils import extract_ohe_groups\n\nnp.random.seed(42)\nGroupLasso.LOG_LOSSES = True",
"_____no_output_____"
]
],
[
[
"Set dataset parameters\n----------------------\n\n",
"_____no_output_____"
]
],
[
[
"num_categories = 30\nmin_options = 2\nmax_options = 10\nnum_datapoints = 10000\nnoise_std = 1",
"_____no_output_____"
]
],
[
[
"Generate data matrix\n--------------------\n\n",
"_____no_output_____"
]
],
[
[
"X_cat = np.empty((num_datapoints, num_categories))\nfor i in range(num_categories):\n X_cat[:, i] = np.random.randint(min_options, max_options, num_datapoints)\n\nohe = OneHotEncoder()\nX = ohe.fit_transform(X_cat)\ngroups = extract_ohe_groups(ohe)\ngroup_sizes = [np.sum(groups == g) for g in np.unique(groups)]\nactive_groups = [np.random.randint(0, 2) for _ in np.unique(groups)]",
"_____no_output_____"
]
],
[
[
"Generate coefficients\n---------------------\n\n",
"_____no_output_____"
]
],
[
[
"w = np.concatenate(\n [\n np.random.standard_normal(group_size) * is_active\n for group_size, is_active in zip(group_sizes, active_groups)\n ]\n)\nw = w.reshape(-1, 1)\ntrue_coefficient_mask = w != 0\nintercept = 2",
"_____no_output_____"
]
],
[
[
"Generate regression targets\n---------------------------\n\n",
"_____no_output_____"
]
],
[
[
"y_true = X @ w + intercept\ny = y_true + np.random.randn(*y_true.shape) * noise_std",
"_____no_output_____"
]
],
[
[
"View noisy data and compute maximum R^2\n---------------------------------------\n\n",
"_____no_output_____"
]
],
[
[
"plt.figure()\nplt.plot(y, y_true, \".\")\nplt.xlabel(\"Noisy targets\")\nplt.ylabel(\"Noise-free targets\")\n# Use noisy y as true because that is what we would have access\n# to in a real-life setting.\nR2_best = r2_score(y, y_true)",
"_____no_output_____"
]
],
[
[
"Generate pipeline and train it\n------------------------------\n\n",
"_____no_output_____"
]
],
[
[
"pipe = pipe = Pipeline(\n memory=None,\n steps=[\n (\n \"variable_selection\",\n GroupLasso(\n groups=groups,\n group_reg=0.1,\n l1_reg=0,\n scale_reg=None,\n supress_warning=True,\n n_iter=100000,\n frobenius_lipschitz=False,\n ),\n ),\n (\"regressor\", Ridge(alpha=1)),\n ],\n)\npipe.fit(X, y)",
"_____no_output_____"
]
],
[
[
"Extract results and compute performance metrics\n-----------------------------------------------\n\n",
"_____no_output_____"
]
],
[
[
"# Extract from pipeline\nyhat = pipe.predict(X)\nsparsity_mask = pipe[\"variable_selection\"].sparsity_mask_\ncoef = pipe[\"regressor\"].coef_.T\n\n# Construct full coefficient vector\nw_hat = np.zeros_like(w)\nw_hat[sparsity_mask] = coef\n\nR2 = r2_score(y, yhat)\n\n# Print performance metrics\nprint(f\"Number variables: {len(sparsity_mask)}\")\nprint(f\"Number of chosen variables: {sparsity_mask.sum()}\")\nprint(f\"R^2: {R2}, best possible R^2 = {R2_best}\")",
"Number variables: 240\nNumber of chosen variables: 144\nR^2: 0.9278329616431523, best possible R^2 = 0.9394648554757948\n"
]
],
[
[
"Visualise regression coefficients\n---------------------------------\n\n",
"_____no_output_____"
]
],
[
[
"for i in range(w.shape[1]):\n plt.figure()\n plt.plot(w[:, i], \".\", label=\"True weights\")\n plt.plot(w_hat[:, i], \".\", label=\"Estimated weights\")\n\nplt.figure()\nplt.plot([w.min(), w.max()], [coef.min(), coef.max()], \"gray\")\nplt.scatter(w, w_hat, s=10)\nplt.ylabel(\"Learned coefficients\")\nplt.xlabel(\"True coefficients\")\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb59a74804b76a7d8222a848129c3365683dacae | 28,656 | ipynb | Jupyter Notebook | notebooks/create_datasets.ipynb | khoadoan/adversarial-hashing | eeeeb464b4fe7084efd38a6257499d674a9a7194 | [
"Apache-2.0"
] | 6 | 2020-06-26T09:51:07.000Z | 2021-09-09T09:48:55.000Z | notebooks/create_datasets.ipynb | khoadoan/adversarial-hashing | eeeeb464b4fe7084efd38a6257499d674a9a7194 | [
"Apache-2.0"
] | null | null | null | notebooks/create_datasets.ipynb | khoadoan/adversarial-hashing | eeeeb464b4fe7084efd38a6257499d674a9a7194 | [
"Apache-2.0"
] | 1 | 2020-07-21T19:34:38.000Z | 2020-07-21T19:34:38.000Z | 39.094134 | 139 | 0.539259 | [
[
[
"%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nimport sys\nimport shutil\nsys.path.append('../code/')\nsys.path.append('../python/')",
"_____no_output_____"
],
[
"from pprint import pprint\nfrom os import path\nimport scipy\nimport os\nfrom matplotlib import pyplot as plt\nfrom tqdm import tqdm\nfrom argparse import Namespace\nimport pickle\nimport seaborn as sns\n\nimport torchvision\nimport torchvision.transforms as transforms\n\nfrom sklearn.model_selection import train_test_split\n# import seaborn as sns\nimport numpy as np\n# import pandas as pd\nimport scipy\nimport torch.utils.data\nimport torchvision.datasets as dset\nimport torchvision.transforms as transforms\nfrom metrics import ranking\n# from sh import sh\nimport data",
"_____no_output_____"
],
[
"def get_numpy_data(dataloader):\n x, y = [], []\n for batch_x, batch_y in tqdm(iter(dataloader)):\n x.append(batch_x.numpy())\n y.append(batch_y.numpy())\n x = np.vstack(x)\n y = np.concatenate(y)\n \n return x, y\n\ndef create_hashgan_train_test(x, y, db_size, query_size):\n train_x, query_x, train_y, query_y = train_test_split(x, y, test_size = query_size, stratify = y)\n train_x, db_x, train_y, db_y = train_test_split(train_x, train_y, test_size = db_size, stratify = train_y)\n \n return train_x, train_y, query_x, query_y, db_x, db_y\n\ndef create_train_test(x, y, query_size):\n \"\"\"Train and DB are using the same dataset: gallery\"\"\"\n train_x, query_x, train_y, query_y = train_test_split(x, y, test_size = query_size, stratify = y)\n \n return train_x, train_y, query_x, query_y, train_x, train_y\n\ndef get_cifar10_data(image_size, batch_size, dataroot='../data/', workers=2, data_transforms=None):\n if data_transforms is None:\n data_transforms = transforms.Compose([\n transforms.Scale(image_size),\n transforms.ToTensor()\n # transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n ])\n train_dataset = dset.CIFAR10(root=dataroot, download=True, train=True, transform=data_transforms)\n train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,\n shuffle=False, num_workers=workers)\n test_dataset = dset.CIFAR10(root=dataroot, download=True, train=False, transform=data_transforms)\n test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size,\n shuffle=False, num_workers=workers)\n \n return train_dataloader, test_dataloader\n\ndef get_places365_dataloaders(image_size, batch_size, dataroot, workers=2, data_transforms=None):\n if data_transforms is None:\n data_transforms = transforms.Compose([\n transforms.Resize(image_size),\n transforms.ToTensor()\n ])\n \n train_dataloader = torch.utils.data.DataLoader(dset.ImageFolder(\n root=path.join(dataroot, 'train'),\n transform=data_transforms\n ), \n batch_size=batch_size, shuffle=False, num_workers=workers)\n \n valid_dataloader = torch.utils.data.DataLoader(dset.ImageFolder(\n root=path.join(dataroot, 'val'),\n transform=data_transforms\n ), \n batch_size=batch_size, shuffle=False, num_workers=workers)\n \n \n \n return train_dataloader, valid_dataloader\n\ndef get_mnist_data(image_size, batch_size, dataroot='../data/', workers=2, data_transforms=None):\n if data_transforms is None:\n data_transforms = transforms.Compose([\n transforms.Scale(image_size),\n transforms.ToTensor(),\n transforms.Normalize((0.5, ), (0.5, )),\n ])\n train_dataset = dset.MNIST(root=dataroot, download=True, train=True, transform=data_transforms)\n train_x, train_y = get_numpy_data(torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,\n shuffle=False, num_workers=workers))\n test_dataset = dset.MNIST(root=dataroot, download=True, train=False, transform=data_transforms)\n test_x, test_y = get_numpy_data(torch.utils.data.DataLoader(test_dataset, batch_size=batch_size,\n shuffle=False, num_workers=workers))\n \n x = np.vstack([train_x, test_x])\n y = np.concatenate([train_y, test_y])\n return x, y\n\ndef get_mnist_3c_data(image_size, batch_size, dataroot='../data/', workers=2, data_transforms=None):\n if data_transforms is None:\n data_transforms = transforms.Compose([\n transforms.Scale(image_size),\n transforms.Grayscale(3),\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n ])\n train_dataset = dset.MNIST(root=dataroot, download=True, train=True, 
transform=data_transforms)\n train_x, train_y = get_numpy_data(torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,\n shuffle=False, num_workers=workers))\n test_dataset = dset.MNIST(root=dataroot, download=True, train=False, transform=data_transforms)\n test_x, test_y = get_numpy_data(torch.utils.data.DataLoader(test_dataset, batch_size=batch_size,\n shuffle=False, num_workers=workers))\n \n x = np.vstack([train_x, test_x])\n y = np.concatenate([train_y, test_y])\n return x, y\n\ndef get_flickr_data(image_size, dataroot='../data/Flickr25K', workers=2, data_transforms=None):\n data_transforms = transforms.Compose([\n transforms.Scale(image_size),\n transforms.ToTensor(),\n transforms.Normalize((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))])\n dataset = torchvision.datasets.ImageFolder(dataroot, transform=data_transforms)\n\n loader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=True, num_workers=0)\n \n test_x, test_y = get_numpy_data(loader)\n \n x = np.vstack([train_x, test_x])\n y = np.concatenate([train_y, test_y])\n return x, y",
"_____no_output_____"
],
[
"def sample_files_from_list(basedir, file_list, n_per_class, seed, ignored_file_list=set()):\n sampled_files = {}\n permuted_indices = np.arange(len(file_list))\n print('Setting seed {}'.format(seed))\n np.random.seed(seed)\n np.random.shuffle(permuted_indices)\n selected_files = []\n for idx in tqdm(permuted_indices):\n filename = file_list[idx]\n if filename not in ignored_file_list:\n _, label, img_filename = filename.split('/')\n if label not in sampled_files:\n sampled_files[label] = []\n\n if len(sampled_files[label]) < n_per_class:\n sampled_files[label].append((img_filename, path.join(basedir, filename)))\n selected_files.append(filename)\n for label, img_list in sampled_files.items():\n assert len(img_list) == n_per_class\n return sampled_files, selected_files\n\ndef sample_train_db_data_from_dataloader(dataloader, num_train, num_db, seed):\n x, y = get_numpy_data(dataloader)\n assert (num_train + num_db) == x.shape[0]\n \n print('Setting seed {}'.format(seed))\n train_x, db_x, train_y, db_y = train_test_split(x, y, train_size = num_train, random_state=seed, stratify = y)\n \n return train_x, train_y, db_x, db_y ",
"_____no_output_____"
],
[
"def make_dir_if_not_exist(folder):\n if not path.exists(folder):\n # print('Creating folder: {}'.format(folder))\n os.makedirs(folder)\n \ndef create_dataset_from_files(basedir, sampled_files):\n if path.exists(basedir):\n raise Exception('Directory already exists: {}'.format(basedir))\n pbar = tqdm(sampled_files.items())\n cnt = 0\n try:\n for label, img_list in pbar :\n label_dir = path.join(basedir, label)\n make_dir_if_not_exist(label_dir)\n\n for img_filename, img_path in img_list:\n cnt += 1\n shutil.copyfile(img_path, path.join(label_dir, img_filename))\n if cnt %500 == 0:\n pbar.set_postfix(file_cnt=cnt)\n pbar.set_postfix(file_cnt=cnt)\n finally:\n pbar.close()\n \ndef check_evenly_sampling(a):\n cnts = np.sum(ranking.one_hot_label(a), axis=0)\n for cnt in cnts:\n assert cnt == cnts[0]",
"_____no_output_____"
],
[
"IMAGE_SIZE = 64",
"_____no_output_____"
]
],
[
[
"# MNIST-3C\n\nMNIST data with 3 channels (stacking the same copy of the 1-channel)",
"_____no_output_____"
]
],
[
[
"all_x, all_y = get_mnist_3c_data(IMAGE_SIZE, 100, dataroot='../data/', workers=0)\ndataset = 'mnist-3c'\nNUM_IMAGES = all_x.shape[0]\nprint('Dataset: {} images'.format(NUM_IMAGES))\nprint('Data range: [{}, {}]'.format(all_x.min(), all_x.max()))",
"_____no_output_____"
],
[
"# DCW-AE paper\nfor seed, num_query in [\n (9, 10000), \n (19, 10000), \n (29, 10000),\n (39, 10000),\n (49, 10000)\n ]:\n num_train = num_db = NUM_IMAGES - num_query\n output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)\n\n print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))\n if path.exists(output_dir):\n print('Deleting existing folder: {}'.format(output_dir))\n shutil.rmtree(output_dir)\n print('Will save in {}'.format(output_dir))\n os.makedirs(output_dir)\n\n train_x, query_x, train_y, query_y = train_test_split(\n all_x, all_y, train_size = num_train, random_state=seed, stratify = all_y)\n db_x, db_y = train_x, train_y\n\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x = query_x, y=query_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x = train_x, y=train_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x = db_x, y=db_y)",
"_____no_output_____"
],
[
"# This is used in DistillHash, SSDH papers\nfor seed, num_train, num_query in [\n (109, 5000, 10000), \n (119, 5000, 10000), \n (129, 5000, 10000),\n (139, 5000, 10000),\n (149, 5000, 10000),\n ]:\n num_db = NUM_IMAGES - num_train - num_query\n output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)\n\n print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))\n if path.exists(output_dir):\n print('Deleting existing folder: {}'.format(output_dir))\n shutil.rmtree(output_dir)\n print('Will save in {}'.format(output_dir))\n os.makedirs(output_dir)\n\n \n train_x, query_x, train_y, query_y = train_test_split(\n all_x, all_y, train_size = num_train, random_state=seed, stratify = all_y)\n db_x, query_x, db_y, query_y = train_test_split(\n query_x, query_y, train_size = num_db, random_state=seed, stratify = query_y)\n\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x = query_x, y=query_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x = train_x, y=train_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x = db_x, y=db_y)",
"_____no_output_____"
]
],
[
[
"# MNIST",
"_____no_output_____"
]
],
[
[
"all_x, all_y = get_mnist_data(IMAGE_SIZE, 100, dataroot='../data/', workers=0)\ndataset = 'mnist'\nNUM_IMAGES = all_x.shape[0]\nprint('Dataset: {} images'.format(NUM_IMAGES))\nprint('Data range: [{}, {}]'.format(all_x.min(), all_x.max()))",
"_____no_output_____"
],
[
"# DCW-AE paper\nfor seed, num_query in [\n (9, 10000), \n (19, 10000), \n (29, 10000),\n (39, 10000),\n (49, 10000)\n ]:\n num_train = num_db = NUM_IMAGES - num_query\n output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)\n\n print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))\n if path.exists(output_dir):\n print('Deleting existing folder: {}'.format(output_dir))\n shutil.rmtree(output_dir)\n print('Will save in {}'.format(output_dir))\n os.makedirs(output_dir)\n\n train_x, query_x, train_y, query_y = train_test_split(\n all_x, all_y, train_size = num_train, random_state=seed, stratify = all_y)\n db_x, db_y = train_x, train_y\n\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x = query_x, y=query_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x = train_x, y=train_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x = db_x, y=db_y)",
"_____no_output_____"
],
[
"# This is used in DistillHash, SSDH papers\nfor seed, num_train, num_query in [\n (109, 5000, 10000), \n (119, 5000, 10000), \n (129, 5000, 10000),\n (139, 5000, 10000),\n (149, 5000, 10000),\n ]:\n num_db = NUM_IMAGES - num_train - num_query\n output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)\n\n print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))\n if path.exists(output_dir):\n print('Deleting existing folder: {}'.format(output_dir))\n shutil.rmtree(output_dir)\n print('Will save in {}'.format(output_dir))\n os.makedirs(output_dir)\n\n \n train_x, query_x, train_y, query_y = train_test_split(\n all_x, all_y, train_size = num_train, random_state=seed, stratify = all_y)\n db_x, query_x, db_y, query_y = train_test_split(\n query_x, query_y, train_size = num_db, random_state=seed, stratify = query_y)\n\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x = query_x, y=query_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x = train_x, y=train_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x = db_x, y=db_y)",
"_____no_output_____"
]
],
[
[
"# Flickr25k",
"_____no_output_____"
]
],
[
[
"dataset = 'flickr25k'\nimage_size=IMAGE_SIZE\ndataroot='../data/Flickr25K/'\nworkers=0\ndata_transforms = transforms.Compose([\n transforms.Resize(image_size),\n transforms.CenterCrop(image_size),\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\nloader = torch.utils.data.DataLoader(torchvision.datasets.ImageFolder(dataroot, transform=data_transforms), \n batch_size=100, shuffle=True, num_workers=0)\n\nall_x, all_y = get_numpy_data(loader)",
"_____no_output_____"
],
[
"NUM_IMAGES = all_x.shape[0]\nprint('Dataset: {} images'.format(NUM_IMAGES))\nprint('Data range: [{}, {}]'.format(all_x.min(), all_x.max()))",
"_____no_output_____"
],
[
"# DCW-AE paper\nfor seed, num_query in [\n (9, 5000), \n (19, 5000), \n (29, 5000),\n (39, 5000),\n (49, 5000)\n ]:\n num_train = num_db = NUM_IMAGES - num_query\n output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)\n\n print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))\n if path.exists(output_dir):\n print('Deleting existing folder: {}'.format(output_dir))\n shutil.rmtree(output_dir)\n print('Will save in {}'.format(output_dir))\n os.makedirs(output_dir)\n\n train_x, query_x, train_y, query_y = train_test_split(\n all_x, all_y, train_size = num_train, random_state=seed, stratify = all_y)\n db_x, db_y = train_x, train_y\n\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x = query_x, y=query_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x = train_x, y=train_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x = db_x, y=db_y)",
"_____no_output_____"
]
],
[
[
"# CIFAR-10",
"_____no_output_____"
]
],
[
[
"dataset = 'cifar10'\n\ntrain_dataloader, query_dataloader = get_cifar10_data(IMAGE_SIZE, 100, dataroot='../data/', workers=0)\ntrain_x, train_y = get_numpy_data(train_dataloader)\nquery_x, query_y = get_numpy_data(query_dataloader)\nall_x = np.vstack([train_x, query_x])\nall_y = np.concatenate([train_y, query_y])\nNUM_IMAGES = all_x.shape[0]\nprint('Dataset: {} images'.format(NUM_IMAGES))\nprint('Data range: [{}, {}]'.format(all_x.min(), all_x.max()))",
"_____no_output_____"
],
[
"# DCW-AE paper\nfor seed, num_query in [\n (9, 10000), \n (19, 10000), \n (29, 10000),\n (39, 10000),\n (49, 10000)\n ]:\n num_train = num_db = NUM_IMAGES - num_query\n output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)\n\n print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))\n if path.exists(output_dir):\n print('Deleting existing folder: {}'.format(output_dir))\n shutil.rmtree(output_dir)\n print('Will save in {}'.format(output_dir))\n os.makedirs(output_dir)\n \n train_x, query_x, train_y, query_y = train_test_split(\n all_x, all_y, train_size = num_train, random_state=seed, stratify = all_y)\n db_x, db_y = train_x, train_y\n\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x = query_x, y=query_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x = train_x, y=train_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x = db_x, y=db_y)",
"_____no_output_____"
],
[
"# This is used in DistillHash, SSDH papers\nfor seed, num_train, num_query in [\n (109, 5000, 10000), \n (119, 5000, 10000), \n (129, 5000, 10000),\n (139, 5000, 10000),\n (149, 5000, 10000),\n ]:\n num_db = NUM_IMAGES - num_train - num_query\n output_dir = '../data/{}_isize{}_seed{}'.format(dataset, IMAGE_SIZE, seed)\n\n print('Setting seed {}: {} train, {} query, {} db'.format(seed, num_train, num_query, num_db))\n if path.exists(output_dir):\n print('Deleting existing folder: {}'.format(output_dir))\n shutil.rmtree(output_dir)\n print('Will save in {}'.format(output_dir))\n os.makedirs(output_dir)\n\n \n train_x, query_x, train_y, query_y = train_test_split(\n all_x, all_y, train_size = num_train, random_state=seed, stratify = all_y)\n db_x, query_x, db_y, query_y = train_test_split(\n query_x, query_y, train_size = num_db, random_state=seed, stratify = query_y)\n\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'query')), x = query_x, y=query_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'train')), x = train_x, y=train_y)\n np.savez_compressed(path.join(output_dir, '{}_{}_manual_{}.npz'.format(dataset, IMAGE_SIZE, 'db')), x = db_x, y=db_y)",
"_____no_output_____"
]
],
[
[
"# END",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
cb59a8d64fe1bbe5a8d0a44be241b8ce98bbdaf5 | 10,141 | ipynb | Jupyter Notebook | .ipynb_checkpoints/Pandas Assignment-checkpoint.ipynb | a-braham/ADS-Assignment-1 | 8ffb0460c069dc9dfd8fafa6d4b3853fd1f21cae | [
"MIT"
] | null | null | null | .ipynb_checkpoints/Pandas Assignment-checkpoint.ipynb | a-braham/ADS-Assignment-1 | 8ffb0460c069dc9dfd8fafa6d4b3853fd1f21cae | [
"MIT"
] | null | null | null | .ipynb_checkpoints/Pandas Assignment-checkpoint.ipynb | a-braham/ADS-Assignment-1 | 8ffb0460c069dc9dfd8fafa6d4b3853fd1f21cae | [
"MIT"
] | null | null | null | 21.808602 | 197 | 0.516813 | [
[
[
"## Pandas\n\n### Instructions\n\nThis assignment will be done completely inside this Jupyter notebook with answers placed in the cell provided.\n\nAll python imports that are needed shown.\n\nFollow all the instructions in this notebook to complete these tasks. \n\nMake sure the CSV data files is in the same folder as this notebook - alumni.csv, groceries.csv",
"_____no_output_____"
]
],
[
[
"# Imports needed to complete this assignment\nimport pandas as pd\nimport seaborn as sns\n\nimport matplotlib.pyplot as plt \n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"### Question 1 : Import CSV file (1 Mark)\n\n\nWrite code to load the alumni csv dataset into a Pandas DataFrame called 'alumni'.\n",
"_____no_output_____"
]
],
[
[
"#q1 (1)\nalumni = pd.read_csv('alumni.csv')\nalumni",
"_____no_output_____"
]
],
[
[
"### Question 2 : Understand the data set (5 Marks)\n\nUse the following pandas commands to understand the data set: a) head, b) tail, c) dtypes, d) info, e) describe ",
"_____no_output_____"
]
],
[
[
"#a) (1)\nalumni.head()",
"_____no_output_____"
],
[
"#b) (1)\nalumni.tail()",
"_____no_output_____"
],
[
"#c) (1)\nalumni.dtypes",
"_____no_output_____"
],
[
"#d) (1)\nalumni.info()",
"_____no_output_____"
],
[
"#e) (1)\nalumni.describe()",
"_____no_output_____"
]
],
[
[
"### Question 3 : Cleaning the data set - part A (3 Marks)\n\na) Use clean_currency method below to strip out commas and dollar signs from Savings ($) column and put into a new column called 'Savings'.",
"_____no_output_____"
]
],
[
[
"def clean_currency(curr):\n return float(curr.replace(\",\", \"\").replace(\"$\", \"\"))\n\nclean_currency(\"$66,000\")\n ",
"_____no_output_____"
],
[
"#a) (2)\nsavings = []\nfor saving in alumni[\"Savings ($)\"]:\n savings.append(clean_currency(saving))\nalumni[\"Savings\"]=savings\nalumni",
"_____no_output_____"
]
],
[
[
"b) Uncomment 'alumni.dtypes.Savings' to check that the type change has occurred",
"_____no_output_____"
]
],
[
[
"#b) (1)\nalumni.dtypes.Savings",
"_____no_output_____"
]
],
[
[
"### Question 4 : Cleaning the data set - part B (5 Marks)\n\na) Run the 'alumni[\"Gender\"].value_counts()' to see the incorrect 'M' fields that need to be converted to 'Male'",
"_____no_output_____"
]
],
[
[
"# a) (1)\nalumni[\"Gender\"].value_counts()",
"_____no_output_____"
]
],
[
[
"b) Now use a '.str.replace' on the 'Gender' column to covert the incorrect 'M' fields. Hint: We must use ^...$ to restrict the pattern to match the whole string. ",
"_____no_output_____"
]
],
[
[
"# b) (1)\ngender = alumni[\"Gender\"].str.replace('(^M$)','Male', regex=True)",
"_____no_output_____"
],
[
"# b) (1)\ngender.value_counts()",
"_____no_output_____"
]
],
[
[
"c) That didn't the set alumni[\"Gender\"] column however. You will need to update the column when using the replace command 'alumni[\"Gender\"]=<replace command>', show how this is done below",
"_____no_output_____"
]
],
[
[
"# c) (1)\nalumni[\"Gender\"] = alumni[\"Gender\"].str.replace('(^M$)','Male', regex=True)\nalumni",
"_____no_output_____"
]
],
[
[
"d) You can set it directly by using the df.loc command, show how this can be done by using the 'df.loc[row_indexer,col_indexer] = value' command to convert the 'M' to 'Male'",
"_____no_output_____"
]
],
[
[
"# d) (1)\nalumni.loc[alumni['Gender'] == 'M'] = 'Male'",
"_____no_output_____"
]
],
[
[
"e) Now run the 'value_counts' for Gender again to see the correct columns - 'Male' and 'Female' ",
"_____no_output_____"
]
],
[
[
"# e) (1)\nalumni[\"Gender\"].value_counts()",
"_____no_output_____"
]
],
[
[
"### Question 5 : Working with the data set (4)\n\na) get the median, b) mean and c) standard deviation for the 'Salary' column",
"_____no_output_____"
]
],
[
[
"# a)(1)\nalumni[\"Salary\"].median()",
"_____no_output_____"
],
[
"# b)(1)\nalumni[\"Salary\"].mean()",
"_____no_output_____"
],
[
"# c)(1)\nalumni[\"Salary\"].std()",
"_____no_output_____"
]
],
[
[
"d) identify which alumni paid more than $15000 in fees, using the 'Fee' column",
"_____no_output_____"
]
],
[
[
"# d) (1)\nalumni[alumni[\"Fee\"] > 15000]",
"_____no_output_____"
]
],
[
[
"### Question 6 : Visualise the data set (4 Marks)\n\na) Using the 'Diploma Type' column, plot a bar chart and show its value counts.",
"_____no_output_____"
]
],
[
[
"#a) (1)\ndiploma_type = alumni.groupby('Diploma Type')\ndiplomas=[ diploma for diploma, data in diploma_type]\nvalue_counts=alumni['Diploma Type'].value_counts()\nplt.bar(diplomas,value_counts)\nplt.xlabel(\"Diploma Type\")\nplt.ylabel(\"Diploma Level\")\nplt.title(\"Diploma level by type\")\nplt.xticks(diplomas,rotation='vertical',size=8)\nplt.show",
"_____no_output_____"
]
],
[
[
"b) Now create a box plot comparison between 'Savings' and 'Salary' columns",
"_____no_output_____"
]
],
[
[
"#b) (1)\nsns.boxplot(x=\"Savings\", y=\"Salary\", data=alumni, orient=\"h\")",
"_____no_output_____"
]
],
[
[
"c) Generate a histogram with the 'Salary' column and use 12 bins.",
"_____no_output_____"
]
],
[
[
"#c) (1)\nplt.hist(alumni[\"Salary\"], bins=12, histtype='bar', rwidth=0.8)\nplt.xlabel('Salary')\nplt.ylabel('Counts')\nplt.title('Salary Comparision')\nplt.show()",
"_____no_output_____"
]
],
[
[
"d) Generate a scatter plot comparing 'Salary' and 'Savings' columns.",
"_____no_output_____"
]
],
[
[
"#d) (1)\nplt.scatter(alumni[\"Salary\"], alumni[\"Savings\"], label='Salary vs Savings')\n\nplt.xlabel('Salary')\nplt.ylabel('Savings')\nplt.title('Salary vs Savings')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Question 7 : Contingency Table (2 Marks)\n\nUsing both the 'Martial Status' and 'Defaulted' create a contingency table. Hint: crosstab",
"_____no_output_____"
]
],
[
[
"# Q7 (2)\npd.crosstab(alumni[\"Marital Status\"], alumni[\"Defaulted\"])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb59a94cb3dcc4d998b1046ad9096e3d3b146365 | 16,557 | ipynb | Jupyter Notebook | 4-assets/BOOKS/Jupyter-Notebooks/Overflow/Conditional_Expectation_Projection.ipynb | impastasyndrome/Lambda-Resource-Static-Assets | 7070672038620d29844991250f2476d0f1a60b0a | [
"MIT"
] | null | null | null | 4-assets/BOOKS/Jupyter-Notebooks/Overflow/Conditional_Expectation_Projection.ipynb | impastasyndrome/Lambda-Resource-Static-Assets | 7070672038620d29844991250f2476d0f1a60b0a | [
"MIT"
] | null | null | null | 4-assets/BOOKS/Jupyter-Notebooks/Overflow/Conditional_Expectation_Projection.ipynb | impastasyndrome/Lambda-Resource-Static-Assets | 7070672038620d29844991250f2476d0f1a60b0a | [
"MIT"
] | 1 | 2021-11-05T07:48:26.000Z | 2021-11-05T07:48:26.000Z | 52.561905 | 717 | 0.588512 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
cb59aa7ba9b5b75db09d550e92b14cce37676bfe | 84,224 | ipynb | Jupyter Notebook | 6_Crossing data.ipynb | ylizama/world_cinema_festivals | c58eedac8a18051b03c42839980be8b458e1e1c4 | [
"Apache-2.0"
] | null | null | null | 6_Crossing data.ipynb | ylizama/world_cinema_festivals | c58eedac8a18051b03c42839980be8b458e1e1c4 | [
"Apache-2.0"
] | null | null | null | 6_Crossing data.ipynb | ylizama/world_cinema_festivals | c58eedac8a18051b03c42839980be8b458e1e1c4 | [
"Apache-2.0"
] | null | null | null | 53.714286 | 21,764 | 0.501211 | [
[
[
"%reload_ext cypher\nimport matplotlib\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\nsns.set_style(\"darkgrid\")",
"_____no_output_____"
],
[
"LA = ['Argentina', 'Bolivia', 'Brazil', 'Chile', 'Colombia', 'Costa Rica', 'Cuba', 'Ecuador', 'El Salvador', 'French Guiana', 'Grenada', 'Guatemala', 'Guiana', 'Haiti', 'Honduras', 'Jamaica', 'Mexico', 'Nicaragua', 'Paraguay', 'Panama', 'Peru', 'Puerto Rico', 'Dominican Republic', 'Surinam', 'Uruguay', 'Venezuela']",
"_____no_output_____"
],
[
"films_in_festivals = pd.read_csv(\"data/films_in_festival.csv\", sep=\"|\")\nfilms_with_funding = pd.read_csv(\"data/films_with_funding.csv\", sep=\"|\")\nfilms_with_funding = films_with_funding[['title', 'year', 'grants']]\nres = pd.merge(films_in_festivals, films_with_funding, on='title')",
"_____no_output_____"
]
],
[
[
"## Datos totales",
"_____no_output_____"
]
],
[
[
"print ('Number of films that received funding: ', len(films_with_funding))\nprint ('Number of those that participated in festivals: ', len(res))\nprint ('Number of those that won an award: ', len(res[res.award.notnull()]))",
"Number of films that received funding: 657\nNumber of those that participated in festivals: 257\nNumber of those that won an award: 32\n"
]
],
[
[
"## Datos de Latinoamerica",
"_____no_output_____"
]
],
[
[
"la_films_with_funding = pd.read_csv(\"data/films_with_funding.csv\", sep=\"|\")\nla_films_with_funding = la_films_with_funding[la_films_with_funding.country.isin(LA)]\nla_res = res[res.country.isin(LA)]",
"_____no_output_____"
],
[
"print ('Number of films that received funding: ', len(la_films_with_funding))\nprint ('Number of those that participated in festivals: ', len(la_res))\nprint ('Number of those that won an award: ', len(la_res[la_res.award.notnull()]))",
"Number of films that received funding: 223\nNumber of those that participated in festivals: 69\nNumber of those that won an award: 10\n"
],
[
"la_res",
"_____no_output_____"
]
],
[
[
"## Distribution per year",
"_____no_output_____"
]
],
[
[
"df_year = la_films_with_funding.groupby('year', sort=False).agg({'title':'count'})\ndf_year.columns = ['Number of films that received funding']\n\ndf_yearla= la_res.groupby('year_y', sort=False).agg({'title':'count'})\ndf_yearla.columns = ['Number of films in festival']\n\ndf_yearla_award = la_res[la_res.award.notnull()]\ndf_yearla_award = df_yearla_award.groupby('year_y', sort=False).agg({'title':'count'})\ndf_yearla_award.columns = ['Number of awarded films']\n\nresult = pd.concat([df_year, df_yearla, df_yearla_award], axis=1)\nresult.sort_index()\n",
"_____no_output_____"
],
[
"result = result.sort_index()\nresult.plot(rot = 0, kind='bar', figsize=(12,5))",
"_____no_output_____"
],
[
"result = result.sort_index()\nresult['Effectiveness'] = round(result['Number of films in festival'] / result['Number of films that received funding'], 2)\nresult['Efficiency'] = round(result['Number of awarded films'] / result['Number of films in festival'], 2)\n",
"_____no_output_____"
],
[
"result",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
cb59ab8565e33fdc23126132a88d5381130b4837 | 10,094 | ipynb | Jupyter Notebook | notebooks/202105252346-julia-basic-causal-and-probabilistic-inference.ipynb | lykmapipo/data-science-learning | a8d07147b8761a60fafc30e7bdf68d9d4ef93602 | [
"MIT"
] | 5 | 2021-05-09T08:45:22.000Z | 2021-09-17T09:21:58.000Z | notebooks/202105252346-julia-basic-causal-and-probabilistic-inference.ipynb | lykmapipo/data-science-learning | a8d07147b8761a60fafc30e7bdf68d9d4ef93602 | [
"MIT"
] | null | null | null | notebooks/202105252346-julia-basic-causal-and-probabilistic-inference.ipynb | lykmapipo/data-science-learning | a8d07147b8761a60fafc30e7bdf68d9d4ef93602 | [
"MIT"
] | 1 | 2021-06-05T07:13:54.000Z | 2021-06-05T07:13:54.000Z | 30.041667 | 322 | 0.518427 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
cb59c56f11e5cd5700a0f4358c402e0a2204c6e9 | 86,091 | ipynb | Jupyter Notebook | day3/sql_scavenger_hunt_day3_notebook.ipynb | mankutimma/sql-scavenger-hunt | 626c8962b032aac3f2b6649001a14f173f3be10b | [
"Apache-2.0"
] | null | null | null | day3/sql_scavenger_hunt_day3_notebook.ipynb | mankutimma/sql-scavenger-hunt | 626c8962b032aac3f2b6649001a14f173f3be10b | [
"Apache-2.0"
] | null | null | null | day3/sql_scavenger_hunt_day3_notebook.ipynb | mankutimma/sql-scavenger-hunt | 626c8962b032aac3f2b6649001a14f173f3be10b | [
"Apache-2.0"
] | null | null | null | 213.096535 | 18,839 | 0.690781 | [
[
[
"<table>\n <tr>\n <td>\n <center>\n <font size=\"+1\">If you haven't used BigQuery datasets on Kaggle previously, check out the <a href = \"https://www.kaggle.com/rtatman/sql-scavenger-hunt-handbook/\">Scavenger Hunt Handbook</a> kernel to get started.</font>\n </center>\n </td>\n </tr>\n</table>\n\n___ \n\n## Previous days:\n\n* [**Day 1:** SELECT, FROM & WHERE](https://www.kaggle.com/rtatman/sql-scavenger-hunt-day-1/)\n* [**Day 2:** GROUP BY, HAVING & COUNT()](https://www.kaggle.com/rtatman/sql-scavenger-hunt-day-2/)\n\n____\n",
"_____no_output_____"
],
[
"# ORDER BY (and Dates!)\n\nSo far in our scavenger hunt, we've learned how to use the following clauses: \n \n SELECT ... \n FROM ...\n (WHERE) ...\n GROUP BY ...\n (HAVING) ...\nWe also learned how to use the COUNT() aggregate function and, if you did the optional extra credit, possibly other aggregate functions as well. (If any of this is sounds unfamiliar to you, you can check out the earlier two days using the links above.)\n\nToday we're going to learn how change the order that data is returned to us using the ORDER BY clause. We're also going to talk a little bit about how to work with dates in SQL, because they're sort of their own thing and can lead to headaches if you're unfamiliar with them.\n\n\n### ORDER BY\n___\n\nFirst, let's learn how to use ORDER BY. ORDER BY is usually the last clause you'll put in your query, since you're going to want to use it to sort the results returned by the rest of your query.\n\nWe're going to be making queries against this version of the table we've been using an example over the past few days. \n\n> **Why would the order of a table change?** This can actually happen to active BigQuery datasets, since if your table is being added to regularly [it may be coalesced every so often and that will change the order of the data in your table](https://stackoverflow.com/questions/16854116/the-order-of-records-in-a-regularly-updated-bigquery-databaseg). \n\nYou'll notice that, unlike in earlier days, our table is no longer sorted by the ID column. \n\n. \n\n** Ordering by a numeric column**\n\nWhen you ORDER BY a numeric column, by default the column will be sorted from the lowest to highest number. So this query will return the ID, Name and Animal columns, all sorted by the number in the ID column. The row with the lowest number in the ID column will be returned first.\n\n SELECT ID, Name, Animal\n FROM `bigquery-public-data.pet_records.pets`\n ORDER BY ID\nVisually, this looks something like this:\n\n\n\n \n** Ordering by a text column**\n\nYou can also order by columns that have text in them. By default, the column you sort on will be sorted alphabetically from the beginning to the end of the alphabet.\n\n SELECT ID, Name, Animal\n FROM `bigquery-public-data.pet_records.pets`\n ORDER BY Animal\n\n\n** Reversing the order**\n\nYou can reverse the sort order (reverse alphabetical order for text columns or high to low for numeric columns) using the DESC argument. \n\n> ** DESC** is short for \"descending\", or high-to-low.\n\nSo this query will sort the selected columns by the Animal column, but the values that are last in alphabetic order will be returned first.\n\n SELECT ID, Name, Animal\n FROM `bigquery-public-data.pet_records.pets`\n ORDER BY Animal DESC\n\n \n### Dates\n____\n\nFinally, let's talk about dates. I'm including these because they are something that I found particularly confusing when I first learned SQL, and I ended up having to use them all. the. time. \n\nThere are two different ways that a date can be stored in BigQuery: as a DATE or as a DATETIME. Here's a quick summary:\n\n**DATE format**\n\nThe DATE format has the year first, then the month, and then the day. It looks like this:\n\n YYYY-[M]M-[D]D\n* YYYY: Four-digit year\n* [M]M: One or two digit month\n* [D]D: One or two digit day\n\n**DATETIME/TIMESTAMP format**\n\nThe DATETIME format is just like the date format... but with time added at the end. (The difference between DATETIME and TIMESTAMP is that the date and time information in a DATETIME is based on a specific timezone. 
On the other hand, a TIMESTAMP will be the same in all time zones, except for the time zone) . Both formats look like this:\n\n YYYY-[M]M-[D]D[( |T)[H]H:[M]M:[S]S[.DDDDDD]][time zone]\n* YYYY: Four-digit year\n* [M]M: One or two digit month\n* [D]D: One or two digit day\n* ( |T): A space or a T separator\n* [H]H: One or two digit hour (valid values from 00 to 23)\n* [M]M: One or two digit minutes (valid values from 00 to 59)\n* [S]S: One or two digit seconds (valid values from 00 to 59)\n* [.DDDDDD]: Up to six fractional digits (i.e. up to microsecond precision)\n* (TIMESTAMP only) [time zone]: String representing the time zone\n\n** Getting only part of a date **\n\nOften, though, you'll only want to look at part of a date, like the year or the day. You can do this using the EXTRACT function and specifying what part of the date you'd like to extract. \n\nSo this query will return one column with just the day of each date in the column_with_timestamp column: \n\n SELECT EXTRACT(DAY FROM column_with_timestamp)\n FROM `bigquery-public-data.imaginary_dataset.imaginary_table`\nOne of the nice things about SQL is that it's very smart about dates and we can ask for information beyond just extracting part of the cell. For example, this query will return one column with just the week in the year (between 1 and 53) of each date in the column_with_timestamp column: \n\n SELECT EXTRACT(WEEK FROM column_with_timestamp)\n FROM `bigquery-public-data.imaginary_dataset.imaginary_table`\nSQL has a lot of power when it comes to dates, and that lets you ask very specific questions using this information. You can find all the functions you can use with dates in BigQuery [on this page](https://cloud.google.com/bigquery/docs/reference/legacy-sql), under \"Date and time functions\". ",
"_____no_output_____"
],
[
"## Example: Which day of the week do the most fatal motor accidents happen on?\n___\n\nNow we're ready to work through an example. Today, we're going to be using the US Traffic Fatality Records database, which contains information on traffic accidents in the US where at least one person died. (It's definitely a sad topic, but if we can understand this data and the trends in it we can use that information to help prevent additional accidents.)\n\nFirst, just like yesterday, we need to get our environment set up. Since you already know how to look at schema information at this point, I'm going to let you do that on your own. \n\n> **Important note:** Make sure that you add the BigQuery dataset you're querying to your kernel. Otherwise you'll get ",
"_____no_output_____"
]
],
[
[
"# import package with helper functions \nimport bq_helper\n\n# create a helper object for this dataset\naccidents = bq_helper.BigQueryHelper(active_project=\"bigquery-public-data\",\n dataset_name=\"nhtsa_traffic_fatalities\")",
"_____no_output_____"
]
],
[
[
"We're going to look at which day of the week the most fatal traffic accidents happen on. I'm going to get the count of the unique id's (in this table they're called \"consecutive_number\") as well as the day of the week for each accident. Then I'm going sort my table so that the days with the most accidents are on returned first.",
"_____no_output_____"
]
],
[
[
"# query to find out the number of accidents which \n# happen on each day of the week\nquery = \"\"\"SELECT COUNT(consecutive_number), \n EXTRACT(DAYOFWEEK FROM timestamp_of_crash)\n FROM `bigquery-public-data.nhtsa_traffic_fatalities.accident_2015`\n GROUP BY EXTRACT(DAYOFWEEK FROM timestamp_of_crash)\n ORDER BY COUNT(consecutive_number) DESC\n \"\"\"",
"_____no_output_____"
]
],
[
[
"Now that our query is ready, let's run it (safely!) and store the results in a dataframe: ",
"_____no_output_____"
]
],
[
[
"# the query_to_pandas_safe method will cancel the query if\n# it would use too much of your quota, with the limit set \n# to 1 GB by default\naccidents_by_day = accidents.query_to_pandas_safe(query)",
"_____no_output_____"
]
],
[
[
"And that gives us a dataframe! Let's quickly plot our data to make sure that it's actually been sorted:",
"_____no_output_____"
]
],
[
[
"# library for plotting\nimport matplotlib.pyplot as plt\n\n# make a plot to show that our data is, actually, sorted:\nplt.plot(accidents_by_day.f0_)\nplt.title(\"Number of Accidents by Rank of Day \\n (Most to least dangerous)\")",
"_____no_output_____"
]
],
[
[
"Yep, our query was, in fact, returned sorted! Now let's take a quick peek to figure out which days are the most dangerous:",
"_____no_output_____"
]
],
[
[
"print(accidents_by_day)",
" f0_ f1_\n0 5659 7\n1 5298 1\n2 4917 6\n3 4461 5\n4 4181 4\n5 4038 2\n6 3985 3\n"
]
],
[
[
"To map from the numbers returned for the day of the week (the second column) to the actual day, I consulted [the BigQuery documentation on the DAYOFWEEK function](https://cloud.google.com/bigquery/docs/reference/legacy-sql#dayofweek), which says that it returns \"an integer between 1 (Sunday) and 7 (Saturday), inclusively\". So we can tell, based on our query, that in 2015 most fatal motor accidents occur on Sunday and Saturday, while the fewest happen on Tuesday.",
"_____no_output_____"
],
[
"# Scavenger hunt\n___\n\nNow it's your turn! Here are the questions I would like you to get the data to answer:\n\n* Which hours of the day do the most accidents occur during?\n * Return a table that has information on how many accidents occurred in each hour of the day in 2015, sorted by the the number of accidents which occurred each hour. Use either the accident_2015 or accident_2016 table for this, and the timestamp_of_crash column. (Yes, there is an hour_of_crash column, but if you use that one you won't get a chance to practice with dates. :P)\n * **Hint:** You will probably want to use the [EXTRACT() function](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#extract_1) for this.\n* Which state has the most hit and runs?\n * Return a table with the number of vehicles registered in each state that were involved in hit-and-run accidents, sorted by the number of hit and runs. Use either the vehicle_2015 or vehicle_2016 table for this, especially the registration_state_name and hit_and_run columns.\n\nIn order to answer these questions, you can fork this notebook by hitting the blue \"Fork Notebook\" at the very top of this page (you may have to scroll up). \"Forking\" something is making a copy of it that you can edit on your own without changing the original.",
"_____no_output_____"
],
[
"**My code begins**",
"_____no_output_____"
],
[
"**Solution to question 1**",
"_____no_output_____"
],
[
"A quick peek into the accident_2015 table",
"_____no_output_____"
]
],
[
[
"# Your code goes here :)\n#accidents.table_schema(table_name=\"accident_2015\") #uncomment for more info",
"_____no_output_____"
],
[
"accidents.head(table_name=\"accident_2015\")",
"_____no_output_____"
],
[
"accidents.head(table_name=\"accident_2015\", selected_columns=[\"consecutive_number\", \"timestamp_of_crash\"])",
"_____no_output_____"
],
[
"#Which hours of the day do the most accidents occur during?\n\nquery1 = \"\"\"SELECT COUNT(consecutive_number), EXTRACT(HOUR FROM timestamp_of_crash)\n FROM `bigquery-public-data.nhtsa_traffic_fatalities.accident_2015`\n GROUP BY EXTRACT(HOUR FROM timestamp_of_crash)\n ORDER BY COUNT(consecutive_number) DESC\n \"\"\"\n\naccidents_by_hour_df = accidents.query_to_pandas_safe(query=query1)\n\naccidents_by_hour_df.head(n=24)",
"_____no_output_____"
]
],
[
[
"So, the most accidents of the day occur in the 18th hour.",
"_____no_output_____"
]
],
[
[
"plt.plot(accidents_by_hour_df.f0_)\nplt.title(s=\"Number of accidents by Rank of hour \\n (Most to least dangerous)\")",
"_____no_output_____"
]
],
[
[
"**Solution to question 2**",
"_____no_output_____"
]
],
[
[
"#Which state has the most hit and runs?\n#accidents.table_schema(table_name=\"vehicle_2015\") #uncomment for more info",
"_____no_output_____"
],
[
"accidents.head(table_name=\"vehicle_2015\", \n selected_columns=[\"consecutive_number\", \"registration_state_name\", \"hit_and_run\"], \n num_rows=30)",
"_____no_output_____"
],
[
"query2 = \"\"\"SELECT COUNT(hit_and_run), registration_state_name\n FROM `bigquery-public-data.nhtsa_traffic_fatalities.vehicle_2015`\n WHERE hit_and_run = \"Yes\"\n GROUP BY registration_state_name\n ORDER BY COUNT(hit_and_run) DESC\n \"\"\"\n\nhit_and_run_statewise_df = accidents.query_to_pandas_safe(query=query2)\n\nhit_and_run_statewise_df.head(len(hit_and_run_statewise_df[\"f0_\"]))",
"_____no_output_____"
]
],
[
[
"California has the highest hit and runs(ignoring 'Unknown').",
"_____no_output_____"
],
[
"Please feel free to ask any questions you have in this notebook or in the [Q&A forums](https://www.kaggle.com/questions-and-answers)! \n\nAlso, if you want to share or get comments on your kernel, remember you need to make it public first! You can change the visibility of your kernel under the \"Settings\" tab, on the right half of your screen.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
cb59ddc4c77e4c9482a0da36686296d0e8166487 | 4,413 | ipynb | Jupyter Notebook | all_notebooks/16-Trees/Trees Interview Problems/Trim a Binary Search Tree - SOLUTION.ipynb | robjpar/pdsa | 735a4cc2a18a0f7f42eec143d272a138925708fa | [
"MIT"
] | null | null | null | all_notebooks/16-Trees/Trees Interview Problems/Trim a Binary Search Tree - SOLUTION.ipynb | robjpar/pdsa | 735a4cc2a18a0f7f42eec143d272a138925708fa | [
"MIT"
] | null | null | null | all_notebooks/16-Trees/Trees Interview Problems/Trim a Binary Search Tree - SOLUTION.ipynb | robjpar/pdsa | 735a4cc2a18a0f7f42eec143d272a138925708fa | [
"MIT"
] | null | null | null | 43.693069 | 1,122 | 0.649898 | [
[
[
"# Trim a Binary Search Tree - SOLUTION\n\n## Problem Statement\n\nGiven the root of a binary search tree and 2 numbers min and max, trim the tree such that all the numbers in the new tree are between min and max (inclusive). The resulting tree should still be a valid binary search tree. So, if we get this tree as input:\n___\n\n\n___\nand we’re given **min value as 5** and **max value as 13**, then the resulting binary search tree should be: \n___\n\n___\nWe should remove all the nodes whose value is not between min and max. \n\n___",
"_____no_output_____"
],
[
"## Solution\n\nWe can do this by performing a post-order traversal of the tree. We first process the left children, then right children, and finally the node itself. So we form the new tree bottom up, starting from the leaves towards the root. As a result while processing the node itself, both its left and right subtrees are valid trimmed binary search trees (may be NULL as well).\n\nAt each node we’ll return a reference based on its value, which will then be assigned to its parent’s left or right child pointer, depending on whether the current node is left or right child of the parent. If current node’s value is between min and max (min<=node<=max) then there’s no action need to be taken, so we return the reference to the node itself. If current node’s value is less than min, then we return the reference to its right subtree, and discard the left subtree. Because if a node’s value is less than min, then its left children are definitely less than min since this is a binary search tree. But its right children may or may not be less than min we can’t be sure, so we return the reference to it. Since we’re performing bottom-up post-order traversal, its right subtree is already a trimmed valid binary search tree (possibly NULL), and left subtree is definitely NULL because those nodes were surely less than min and they were eliminated during the post-order traversal. Remember that in post-order traversal we first process all the children of a node, and then finally the node itself.\n\nSimilar situation occurs when node’s value is greater than max, we now return the reference to its left subtree. Because if a node’s value is greater than max, then its right children are definitely greater than max. But its left children may or may not be greater than max. So we discard the right subtree and return the reference to the already valid left subtree. The code is easier to understand:",
"_____no_output_____"
]
],
[
[
"def trimBST(tree, minVal, maxVal): \n \n if not tree: \n return \n \n tree.left=trimBST(tree.left, minVal, maxVal) \n tree.right=trimBST(tree.right, minVal, maxVal) \n \n if minVal<=tree.val<=maxVal: \n return tree \n \n if tree.val<minVal: \n return tree.right \n \n if tree.val>maxVal: \n return tree.left ",
"_____no_output_____"
]
],
[
[
"The complexity of this algorithm is O(N), where N is the number of nodes in the tree. Because we basically perform a post-order traversal of the tree, visiting each and every node one. This is optimal because we should visit every node at least once. This is a very elegant question that demonstrates the effectiveness of recursion in trees. ",
"_____no_output_____"
],
[
"# Good Job!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
cb59e231917a0d321265ab060e2ba7ae76ba57a6 | 10,796 | ipynb | Jupyter Notebook | GHZGame/GHZGame.ipynb | vxfield/QuantumKatas | 4fd06ce5776164504725e564044241f155c4d9a1 | [
"MIT"
] | 1 | 2020-09-26T22:29:24.000Z | 2020-09-26T22:29:24.000Z | GHZGame/GHZGame.ipynb | FingerLeakers/QuantumKatas | 4fd06ce5776164504725e564044241f155c4d9a1 | [
"MIT"
] | null | null | null | GHZGame/GHZGame.ipynb | FingerLeakers/QuantumKatas | 4fd06ce5776164504725e564044241f155c4d9a1 | [
"MIT"
] | null | null | null | 30.757835 | 254 | 0.572342 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
cb59e5a638e4a90c284427d549f31674785689a0 | 4,321 | ipynb | Jupyter Notebook | tsne_plot.ipynb | nabaruns/DLNLP | fd3fb5d00308e6871ceebf60849102e6b90b7b27 | [
"MIT"
] | 1 | 2022-01-12T01:44:46.000Z | 2022-01-12T01:44:46.000Z | tsne_plot.ipynb | nabaruns/DLNLP | fd3fb5d00308e6871ceebf60849102e6b90b7b27 | [
"MIT"
] | null | null | null | tsne_plot.ipynb | nabaruns/DLNLP | fd3fb5d00308e6871ceebf60849102e6b90b7b27 | [
"MIT"
] | null | null | null | 17.565041 | 74 | 0.492479 | [
[
[
"%cd ../text_gcn_m",
"/4tb/nabarun/nlp/text_gcn_m\n"
],
[
"%matplotlib inline",
"_____no_output_____"
],
[
"!../gcn_text_categorization/venv/bin/python3 visualize.py R8",
"Figure(640x480)\r\n"
],
[
"!../gcn_text_categorization/venv/bin/python3 visualize.py R52",
"Figure(640x480)\r\n"
],
[
"!../gcn_text_categorization/venv/bin/python3 visualize.py mr",
"Figure(640x480)\r\n"
],
[
"!../gcn_text_categorization/venv/bin/python3 visualize.py ohsumed",
"Figure(640x480)\r\n"
],
[
"!../gcn_text_categorization/venv/bin/python3 visualize.py cora",
"Figure(640x480)\r\n"
],
[
"!../gcn_text_categorization/venv/bin/python3 visualize.py pubmed",
"Figure(640x480)\r\n"
],
[
"!../gcn_text_categorization/venv/bin/python3 visualize.py citeseer",
"Figure(640x480)\r\n"
],
[
"!../gcn_text_categorization/venv/bin/python3 visualize.py 20ng",
"Figure(640x480)\r\n"
],
[
"!../gcn_text_categorization/venv/bin/python3 visualize.py fakenews",
"Figure(640x480)\r\n"
],
[
"!../gcn_text_categorization/venv/bin/python3 visualize.py buzzfeed",
"Figure(640x480)\r\n"
],
[
"!../gcn_text_categorization/venv/bin/python3 visualize.py politifact",
"Figure(640x480)\r\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb59e956009782871156cad1f9a422345cc225a2 | 299,080 | ipynb | Jupyter Notebook | 01_pairwise_correlations.ipynb | webartifex/ames-housing | ea9429097035d9440248a95eb927c739a0a58de9 | [
"MIT"
] | 2 | 2020-10-27T04:37:33.000Z | 2020-12-19T21:35:56.000Z | 01_pairwise_correlations.ipynb | webartifex/ames-housing | ea9429097035d9440248a95eb927c739a0a58de9 | [
"MIT"
] | 4 | 2020-03-24T16:17:45.000Z | 2020-06-29T20:25:35.000Z | 01_pairwise_correlations.ipynb | webartifex/ames-housing | ea9429097035d9440248a95eb927c739a0a58de9 | [
"MIT"
] | 3 | 2020-11-20T09:30:01.000Z | 2021-07-21T22:12:19.000Z | 123.230326 | 115,632 | 0.789357 | [
[
[
"# Pair-wise Correlations\n\nThe purpose is to identify predictor variables strongly correlated with the sales price and with each other to get an idea of what variables could be good predictors and potential issues with collinearity.\n\nFurthermore, Box-Cox transformations and linear combinations of variables are added where applicable or useful.",
"_____no_output_____"
],
[
"## \"Housekeeping\"",
"_____no_output_____"
]
],
[
[
"import warnings\nimport json\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\nfrom sklearn.preprocessing import PowerTransformer\nfrom tabulate import tabulate\n\nfrom utils import (\n ALL_VARIABLES,\n CONTINUOUS_VARIABLES,\n DISCRETE_VARIABLES,\n NUMERIC_VARIABLES,\n ORDINAL_VARIABLES,\n TARGET_VARIABLES,\n encode_ordinals,\n load_clean_data,\n print_column_list,\n)",
"_____no_output_____"
],
[
"pd.set_option(\"display.max_columns\", 100)",
"_____no_output_____"
],
[
"sns.set_style(\"white\")",
"_____no_output_____"
]
],
[
[
"## Load the Data\n\nOnly a subset of the previously cleaned data is used in this analysis. In particular, it does not make sense to calculate correlations involving nominal variables.\n\nFurthermore, ordinal variables are encoded as integers (with greater values indicating a higher sales price by \"guts feeling\"; refer to the [data documentation](https://www.amstat.org/publications/jse/v19n3/decock/DataDocumentation.txt) to see the un-encoded values) and take part in the analysis.\n\nA `cleaned_df` DataFrame with the original data from the previous notebook is kept so as to restore the encoded ordinal labels again at the end of this notebook for correct storage.",
"_____no_output_____"
]
],
[
[
"cleaned_df = load_clean_data()",
"_____no_output_____"
],
[
"df = cleaned_df[NUMERIC_VARIABLES + ORDINAL_VARIABLES + TARGET_VARIABLES]\ndf = encode_ordinals(df)",
"_____no_output_____"
],
[
"df[NUMERIC_VARIABLES].head()",
"_____no_output_____"
],
[
"df[ORDINAL_VARIABLES].head()",
"_____no_output_____"
]
],
[
[
"## Linearly \"dependent\" Features",
"_____no_output_____"
],
[
"The \"above grade (ground) living area\" (= *Gr Liv Area*) can be split into 1st and 2nd floor living area plus some undefined rest.",
"_____no_output_____"
]
],
[
[
"assert not (\n df[\"Gr Liv Area\"]\n != (df[\"1st Flr SF\"] + df[\"2nd Flr SF\"] + df[\"Low Qual Fin SF\"])\n).any()",
"_____no_output_____"
]
],
[
[
"The various basement areas also add up.",
"_____no_output_____"
]
],
[
[
"assert not (\n df[\"Total Bsmt SF\"]\n != (df[\"BsmtFin SF 1\"] + df[\"BsmtFin SF 2\"] + df[\"Bsmt Unf SF\"])\n).any()",
"_____no_output_____"
]
],
[
[
"Calculate a variable for the total living area *Total SF* as this is the number communicated most often in housing ads.",
"_____no_output_____"
]
],
[
[
"df[\"Total SF\"] = df[\"Gr Liv Area\"] + df[\"Total Bsmt SF\"]\nnew_variables = [\"Total SF\"]\nCONTINUOUS_VARIABLES.append(\"Total SF\")",
"_____no_output_____"
]
],
[
[
"The different porch areas are unified into a new variable *Total Porch SF*. This potentially helps making the presence of a porch in general relevant in the prediction.",
"_____no_output_____"
]
],
[
[
"df[\"Total Porch SF\"] = (\n df[\"3Ssn Porch\"] + df[\"Enclosed Porch\"] + df[\"Open Porch SF\"]\n + df[\"Screen Porch\"] + df[\"Wood Deck SF\"]\n)\nnew_variables.append(\"Total Porch SF\")\nCONTINUOUS_VARIABLES.append(\"Total Porch SF\")",
"_____no_output_____"
]
],
[
[
"The various types of rooms \"above grade\" (i.e., *TotRms AbvGrd*, *Bedroom AbvGr*, *Kitchen AbvGr*, and *Full Bath*) do not add up (only in 29% of the cases they do). Therefore, no single unified variable can be used as a predictor.",
"_____no_output_____"
]
],
[
[
"round(\n 100\n * (\n df[\"TotRms AbvGrd\"]\n == (df[\"Bedroom AbvGr\"] + df[\"Kitchen AbvGr\"] + df[\"Full Bath\"])\n ).sum()\n / df.shape[0]\n)",
"_____no_output_____"
]
],
[
[
"Unify the number of various types of bathrooms into a single variable. Note that \"half\" bathrooms are counted as such.",
"_____no_output_____"
]
],
[
[
"df[\"Total Bath\"] = (\n df[\"Full Bath\"] + 0.5 * df[\"Half Bath\"]\n + df[\"Bsmt Full Bath\"] + 0.5 * df[\"Bsmt Half Bath\"]\n)\nnew_variables.append(\"Total Bath\")\nDISCRETE_VARIABLES.append(\"Total Bath\")",
"_____no_output_____"
]
],
[
[
"## Box-Cox Transformations\n\nOnly numeric columns with non-negative values are eligable for a Box-Cox transformation.",
"_____no_output_____"
]
],
[
[
"columns = CONTINUOUS_VARIABLES + TARGET_VARIABLES\ntransforms = df[columns].describe().T\ntransforms = list(transforms[transforms['min'] > 0].index)\nprint_column_list(transforms)",
"1st Flr SF First Floor square feet\nGr Liv Area Above grade (ground) living area square feet\nLot Area Lot size in square feet\nSalePrice\nTotal SF\n"
]
],
[
[
"A common convention is to use Box-Cox transformations only if the found lambda value (estimated with Maximum Likelyhood Estimation) is in the range from -3 to +3.\n\nConsequently, the only applicable transformation are for *SalePrice* and the new variable *Total SF*.",
"_____no_output_____"
]
],
[
[
"# Check the Box-Cox tranformations for each column seperately\n# to decide if the optimal lambda value is in an acceptable range.\noutput = []\ntransformed_columns = []\nfor column in transforms:\n X = df[[column]] # 2D array needed!\n pt = PowerTransformer(method=\"box-cox\", standardize=False)\n # Suppress a weird but harmless warning from scipy\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n pt.fit(X)\n # Check if the optimal lambda is ok.\n lambda_ = pt.lambdas_[0].round(1)\n if -3 <= lambda_ <= 3:\n lambda_label = 0 if lambda_ <= 0.01 else lambda_ # to avoid -0.0\n new_column = f\"{column} (box-cox-{lambda_label})\"\n df[new_column] = (\n np.log(X) if lambda_ <= 0.001 else (((X ** lambda_) - 1) / lambda_)\n )\n # Track the new column in the appropiate list.\n new_variables.append(new_column)\n if column in TARGET_VARIABLES:\n TARGET_VARIABLES.append(new_column)\n else:\n CONTINUOUS_VARIABLES.append(new_column)\n # To show only the transformed columns below.\n transformed_columns.append(column)\n transformed_columns.append(new_column)\n output.append((\n f\"{column}:\",\n f\"use lambda of {lambda_}\",\n ))\n else:\n output.append((\n f\"{column}:\",\n f\"lambda of {lambda_} not in realistic range\",\n ))\nprint(tabulate(sorted(output), tablefmt=\"plain\"))",
"1st Flr SF: use lambda of -0.0\nGr Liv Area: use lambda of -0.0\nLot Area: use lambda of 0.1\nSalePrice: use lambda of 0.0\nTotal SF: use lambda of 0.2\n"
],
[
"df[transformed_columns].head()",
"_____no_output_____"
]
],
[
[
"## Correlations\n\nThe pair-wise correlations are calculated based on the type of the variables:\n- **continuous** variables are assumed to be linearly related with the target and each other or not: use **Pearson's correlation coefficient**\n- **discrete** (because of the low number of distinct realizations as seen in the data cleaning notebook) and **ordinal** (low number of distinct realizations as well) variables are assumed to be related in a monotonic way with the target and each other or not: use **Spearman's rank correlation coefficient**\n\nFurthermore, for a **naive feature selection** a \"rule of thumb\" classification in *weak* and *strong* correlation is applied to the predictor variables. The identified variables will be used in the prediction modelling part to speed up the feature selection. A correlation between 0.33 and 0.66 is considered *weak* while a correlation above 0.66 is considered *strong* (these thresholds refer to the absolute value of the correlation). Correlations are calculated for **each** target variable (i.e., raw \"SalePrice\" and Box-Cox transformation thereof). Correlations below 0.1 are considered \"uncorrelated\".",
"_____no_output_____"
]
],
[
[
"strong = 0.66\nweak = 0.33\nuncorrelated = 0.1",
"_____no_output_____"
]
],
[
[
"Two heatmaps below (implemented in the reusable `plot_correlation` function) help visualize the correlations.\n\nObviously, many variables are pair-wise correlated. This could yield regression coefficients *inprecise* and not usable / interpretable. At the same time, this does not lower the predictive power of a model as a whole. In contrast to the pair-wise correlations, *multi-collinearity* is not checked here.",
"_____no_output_____"
]
],
[
[
"def plot_correlation(data, title):\n \"\"\"Visualize a correlation matrix in a nice heatmap.\"\"\"\n fig, ax = plt.subplots(figsize=(12, 12))\n ax.set_title(title, fontsize=24)\n # Blank out the upper triangular part of the matrix.\n mask = np.zeros_like(data, dtype=np.bool)\n mask[np.triu_indices_from(mask)] = True\n # Use a diverging color map.\n cmap = sns.diverging_palette(240, 0, as_cmap=True)\n # Adjust the labels' font size.\n labels = data.columns\n ax.set_xticklabels(labels, fontsize=10)\n ax.set_yticklabels(labels, fontsize=10)\n # Plot it.\n sns.heatmap(\n data, vmin=-1, vmax=1, cmap=cmap, center=0, linewidths=.5,\n cbar_kws={\"shrink\": .5}, square=True, mask=mask, ax=ax\n )",
"_____no_output_____"
]
],
[
[
"### Pearson\n\nPearson's correlation coefficient shows a linear relationship between two variables.",
"_____no_output_____"
]
],
[
[
"columns = CONTINUOUS_VARIABLES + TARGET_VARIABLES\npearson = df[columns].corr(method=\"pearson\")",
"_____no_output_____"
],
[
"plot_correlation(pearson, \"Pearson's Correlation\")",
"_____no_output_____"
]
],
[
[
"Predictors weakly or strongly correlated with a target variable are collected.",
"_____no_output_____"
]
],
[
[
"pearson_weakly_correlated = set()\npearson_strongly_correlated = set()\npearson_uncorrelated = set()\n# Iterate over the raw and transformed target.\nfor target in TARGET_VARIABLES:\n corrs = pearson.loc[target].drop(TARGET_VARIABLES).abs()\n pearson_weakly_correlated |= set(corrs[(weak < corrs) & (corrs <= strong)].index)\n pearson_strongly_correlated |= set(corrs[(strong < corrs)].index)\n pearson_uncorrelated |= set(corrs[(corrs < uncorrelated)].index)\n# Show that no contradiction exists between the classifications.\nassert pearson_weakly_correlated & pearson_strongly_correlated == set()\nassert pearson_weakly_correlated & pearson_uncorrelated == set()",
"_____no_output_____"
]
],
[
[
"Show the continuous variables that are weakly and strongly correlated with the sales price or uncorrelated.",
"_____no_output_____"
]
],
[
[
"print_column_list(pearson_uncorrelated)",
"3Ssn Porch Three season porch area in square feet\nBsmtFin SF 2 Type 2 finished square feet\nLow Qual Fin SF Low quality finished square feet (all floors)\nMisc Val $Value of miscellaneous feature\nPool Area Pool area in square feet\n"
],
[
"print_column_list(pearson_weakly_correlated)",
"1st Flr SF First Floor square feet\n1st Flr SF (box-cox-0)\nBsmtFin SF 1 Type 1 finished square feet\nGarage Area Size of garage in square feet\nLot Area (box-cox-0.1)\nMas Vnr Area Masonry veneer area in square feet\nTotal Bsmt SF Total square feet of basement area\nTotal Porch SF\nWood Deck SF Wood deck area in square feet\n"
],
[
"print_column_list(pearson_strongly_correlated)",
"Gr Liv Area Above grade (ground) living area square feet\nGr Liv Area (box-cox-0)\nTotal SF\nTotal SF (box-cox-0.2)\n"
]
],
[
[
"### Spearman\n\nSpearman's correlation coefficient shows an ordinal rank relationship between two variables.",
"_____no_output_____"
]
],
[
[
"columns = sorted(DISCRETE_VARIABLES + ORDINAL_VARIABLES) + TARGET_VARIABLES\nspearman = df[columns].corr(method=\"spearman\")",
"_____no_output_____"
],
[
"plot_correlation(spearman, \"Spearman's Rank Correlation\")",
"_____no_output_____"
]
],
[
[
"Predictors weakly or strongly correlated with a target variable are collected.",
"_____no_output_____"
]
],
[
[
"spearman_weakly_correlated = set()\nspearman_strongly_correlated = set()\nspearman_uncorrelated = set()\n# Iterate over the raw and transformed target.\nfor target in TARGET_VARIABLES:\n corrs = spearman.loc[target].drop(TARGET_VARIABLES).abs()\n spearman_weakly_correlated |= set(corrs[(weak < corrs) & (corrs <= strong)].index)\n spearman_strongly_correlated |= set(corrs[(strong < corrs)].index)\n spearman_uncorrelated |= set(corrs[(corrs < uncorrelated)].index)\n# Show that no contradiction exists between the classifications.\nassert spearman_weakly_correlated & spearman_strongly_correlated == set()\nassert spearman_weakly_correlated & spearman_uncorrelated == set()",
"_____no_output_____"
]
],
[
[
"Show the discrete and ordinal variables that are weakly and strongly correlated with the sales price or uncorrelated.",
"_____no_output_____"
]
],
[
[
"print_column_list(spearman_uncorrelated)",
"Bsmt Half Bath Basement half bathrooms\nBsmtFin Type 2 Rating of basement finished area (if multiple types)\nExter Cond Evaluates the present condition of the material on the exterior\nLand Slope Slope of property\nMo Sold Month Sold (MM)\nPool QC Pool quality\nUtilities Type of utilities available\nYr Sold Year Sold (YYYY)\n"
],
[
"print_column_list(spearman_weakly_correlated)",
"Bsmt Exposure Refers to walkout or garden level walls\nBsmtFin Type 1 Rating of basement finished area\nFireplace Qu Fireplace quality\nFireplaces Number of fireplaces\nFull Bath Full bathrooms above grade\nGarage Cond Garage condition\nGarage Finish Interior finish of the garage\nGarage Qual Garage quality\nHalf Bath Half baths above grade\nHeating QC Heating quality and condition\nLot Shape General shape of property\nPaved Drive Paved driveway\nTotRms AbvGrd Total rooms above grade (does not include bathrooms)\nYear Remod/Add Remodel date (same as construction date if no remodeling or additions)\n"
],
[
"print_column_list(spearman_strongly_correlated)",
"Bsmt Qual Evaluates the height of the basement\nExter Qual Evaluates the quality of the material on the exterior\nGarage Cars Size of garage in car capacity\nKitchen Qual Kitchen quality\nOverall Qual Rates the overall material and finish of the house\nTotal Bath\nYear Built Original construction date\n"
]
],
[
[
"## Save the Results",
"_____no_output_____"
],
[
"### Save the weakly and strongly correlated Variables\n\nThe subset of variables that have a correlation with the house price are saved in a simple JSON file for easy re-use.",
"_____no_output_____"
]
],
[
[
"with open(\"data/correlated_variables.json\", \"w\") as file:\n file.write(json.dumps({\n \"uncorrelated\": sorted(\n list(pearson_uncorrelated) + list(spearman_uncorrelated)\n ),\n \"weakly_correlated\": sorted(\n list(pearson_weakly_correlated) + list(spearman_weakly_correlated)\n ),\n \"strongly_correlated\": sorted(\n list(pearson_strongly_correlated) + list(spearman_strongly_correlated)\n ),\n }))",
"_____no_output_____"
]
],
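A minimal sketch of the re-use side, assuming a later notebook runs from the same working directory: the file loads straight back into the three lists written above.

```python
import json

# Load the saved classification back into Python lists.
with open("data/correlated_variables.json") as file:
    correlated = json.load(file)

print(len(correlated["strongly_correlated"]), "strongly correlated variables")
print(len(correlated["weakly_correlated"]), "weakly correlated variables")
print(len(correlated["uncorrelated"]), "uncorrelated variables")
```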
[
[
"### Save the Data\n\nSort the new variables into the unprocessed `cleaned_df` DataFrame with the targets at the end. This \"restores\" the ordinal labels again for storage.",
"_____no_output_____"
]
],
[
[
"for column in new_variables:\n cleaned_df[column] = df[column]\nfor target in set(TARGET_VARIABLES) & set(new_variables):\n new_variables.remove(target)\ncleaned_df = cleaned_df[sorted(ALL_VARIABLES + new_variables) + TARGET_VARIABLES]",
"_____no_output_____"
]
],
[
[
"In totality, this notebook added two new linear combinations and one Box-Cox transformation to the previous 78 columns.",
"_____no_output_____"
]
],
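The added columns can be listed explicitly with the variables already defined in this notebook (a small check, not new analysis):

```python
# Columns present in the stored DataFrame that were not in the original variable lists.
added_columns = sorted(set(cleaned_df.columns) - set(ALL_VARIABLES) - set(TARGET_VARIABLES))
print(added_columns)
```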
[
[
"cleaned_df.shape",
"_____no_output_____"
],
[
"cleaned_df.head()",
"_____no_output_____"
],
[
"cleaned_df.to_csv(\"data/data_clean_with_transformations.csv\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
cb59f57d4c61c96c6dbab0899822d646b7f0220b | 192,055 | ipynb | Jupyter Notebook | _doc/notebooks/challenges/city_bike/business_chicago.ipynb | sdpython/ensae_projects | 9647751da053c09fa35402527b294e02a4e6e2ad | [
"MIT"
] | 1 | 2020-11-22T10:24:54.000Z | 2020-11-22T10:24:54.000Z | _doc/notebooks/challenges/city_bike/business_chicago.ipynb | sdpython/ensae_projects | 9647751da053c09fa35402527b294e02a4e6e2ad | [
"MIT"
] | 13 | 2017-11-20T00:20:45.000Z | 2021-01-05T14:13:51.000Z | _doc/notebooks/challenges/city_bike/business_chicago.ipynb | sdpython/ensae_projects | 9647751da053c09fa35402527b294e02a4e6e2ad | [
"MIT"
] | null | null | null | 356.317254 | 147,014 | 0.897524 | [
[
[
"# Chicago\n\nThis notebooks displays some of the data available at [Business Licenses - Current Active](https://data.cityofchicago.org/Community-Economic-Development/Business-Licenses-Current-Active/uupf-x98q). We assume the data was downloaded.",
"_____no_output_____"
]
],
[
[
"from jyquickhelper import add_notebook_menu\nadd_notebook_menu()",
"_____no_output_____"
],
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Data",
"_____no_output_____"
]
],
[
[
"from pyensae.datasource import download_data\nfile = download_data(\"rows.csv\", url=\"https://data.cityofchicago.org/api/views/uupf-x98q/\")",
"_____no_output_____"
]
],
[
[
"## Businesses",
"_____no_output_____"
]
],
[
[
"import pandas\nbusinesses = df = pandas.read_csv(\"rows.csv\", low_memory=False)\ndf.head()",
"_____no_output_____"
],
[
"df.plot(x=\"LONGITUDE\", y=\"LATITUDE\", kind=\"scatter\");",
"_____no_output_____"
],
[
"minlon, maxlon = df[\"LONGITUDE\"].min(), df[\"LONGITUDE\"].max()\nminlat, maxlat = df[\"LATITUDE\"].min(), df[\"LATITUDE\"].max()\nminlon, maxlon, minlat, maxlat",
"_____no_output_____"
],
[
"import cartopy.crs as ccrs\nimport cartopy.feature as cfeature\nimport matplotlib.pyplot as plt\n\nfig = plt.figure(figsize=(7, 7))\nax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())\nax.set_extent([minlon, maxlon, minlat, maxlat])\nax.add_feature(cfeature.OCEAN)\nax.add_feature(cfeature.COASTLINE)\nax.add_feature(cfeature.LAKES)\nax.add_feature(cfeature.LAND)\nax.add_feature(cfeature.RIVERS)\nax.add_feature(cfeature.BORDERS, linestyle=':')\n\nax.plot(df[\"LONGITUDE\"], df[\"LATITUDE\"], '.', ms=0.9);",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
cb59fb28dd254dee55f404ef6079b63bab7ec192 | 551,235 | ipynb | Jupyter Notebook | MachineLearningIntro.ipynb | cjeffr/MachineLearningIrma | 3bedc4786ad1072f1f56bca2c6ff1e074f1c8da1 | [
"Apache-2.0"
] | null | null | null | MachineLearningIntro.ipynb | cjeffr/MachineLearningIrma | 3bedc4786ad1072f1f56bca2c6ff1e074f1c8da1 | [
"Apache-2.0"
] | null | null | null | MachineLearningIntro.ipynb | cjeffr/MachineLearningIrma | 3bedc4786ad1072f1f56bca2c6ff1e074f1c8da1 | [
"Apache-2.0"
] | null | null | null | 454.439406 | 302,556 | 0.936751 | [
[
[
"import os\nimport tarfile\nfrom six.moves import urllib\n\nDOWNLOAD_ROOT = 'https://raw.githubusercontent.com/ageron/handson-ml/master/'\nHOUSING_PATH = 'datasets/housing'\nHOUSING_URL = DOWNLOAD_ROOT + HOUSING_PATH + '/housing.tgz'\n\ndef fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):\n if not os.path.isdir(housing_path):\n os.makedirs(housing_path)\n tgz_path = os.path.join(housing_path, 'housing.tgz')\n urllib.request.urlretrieve(housing_url, tgz_path)\n housing_tgz = tarfile.open(tgz_path)\n housing_tgz.extractall(path=housing_path)\n housing_tgz.close()\nfetch_housing_data() ",
"_____no_output_____"
],
[
"import pandas as pd\ndef load_housing_data(housing_path=HOUSING_PATH):\n csv_path = os.path.join(housing_path, 'housing.csv')\n return pd.read_csv(csv_path)\nhousing = load_housing_data()\nhousing.head()",
"_____no_output_____"
],
[
"housing.info()\nhousing.describe()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 20640 entries, 0 to 20639\nData columns (total 10 columns):\nlongitude 20640 non-null float64\nlatitude 20640 non-null float64\nhousing_median_age 20640 non-null float64\ntotal_rooms 20640 non-null float64\ntotal_bedrooms 20433 non-null float64\npopulation 20640 non-null float64\nhouseholds 20640 non-null float64\nmedian_income 20640 non-null float64\nmedian_house_value 20640 non-null float64\nocean_proximity 20640 non-null object\ndtypes: float64(9), object(1)\nmemory usage: 1.6+ MB\n"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nhousing.hist(bins=50, figsize=(20,15))\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\nimport hashlib\n\ndef split_train_test(data, test_ratio):\n shuffled_indices = np.random.permutation(len(data))\n test_set_size = int(len(data)* test_ratio)\n test_indices = shuffled_indices[:test_set_size]\n train_indices = shuffled_indices[test_set_size:]\n return data.iloc[train_indices], data.iloc[test_indices]\ntrain_set, test_set = split_train_test(housing, 0.2)\nprint(len(train_set), 'train + ', len(test_set), 'test')\n\ndef test_set_check(identifier, test_ratio, hash):\n return hash(np.int64(identifier)).digest()[-1] < 256 * test_ratio\ndef split_train_test_by_id(data, test_ratio, id_column, hash=hashlib.md5):\n ids = data[id_column]\n in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio, hash))\n return data.loc[~in_test_set], data.loc[in_test_set]\nhousing_with_id = housing.reset_index()\ntrain_set, test_set = split_train_test_by_id(housing_with_id, 0.2, 'index')",
"16512 train + 4128 test\n"
],
[
"housing_with_id[\"id\"] = housing[\"longitude\"] * 1000 + housing[\"latitude\"]\ntrain_set, test_set = split_train_test_by_id(housing_with_id, 0.2, 'id')",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\ntrain_set, test_set = train_test_split(housing, test_size=.2, random_state=42)",
"_____no_output_____"
],
[
"housing['income_cat'] = np.ceil(housing['median_income'] / 1.5)\nhousing['income_cat'].where(housing['income_cat'] < 5, 5.0, inplace=True)",
"_____no_output_____"
],
[
"from sklearn.model_selection import StratifiedShuffleSplit\nsplit = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)\nfor train_index, test_index in split.split(housing, housing['income_cat']):\n strat_train_set = housing.loc[train_index]\n strat_test_set = housing.loc[test_index]",
"_____no_output_____"
],
[
"housing['income_cat'].value_counts() / len(housing)",
"_____no_output_____"
],
[
"for set in (strat_train_set, strat_test_set):\n set.drop(['income_cat'], axis=1, inplace=True)",
"_____no_output_____"
],
[
"housing = strat_train_set.copy()",
"_____no_output_____"
],
[
"housing.plot(kind='scatter', x='longitude', y='latitude', alpha=0.4,\n s=housing['population']/100, label='population',\n c='median_house_value', cmap=plt.get_cmap('jet'), colorbar=True,\n )\nplt.legend()\n",
"_____no_output_____"
],
[
"corr_matrix = housing.corr()",
"_____no_output_____"
],
[
"corr_matrix['median_house_value'].sort_values(ascending=False)",
"_____no_output_____"
],
[
"from pandas.tools.plotting import scatter_matrix\nattributes = ['median_house_value', 'median_income', 'total_rooms',\n 'housing_median_age']\nscatter_matrix(housing[attributes], figsize=(12,8))",
"/home/catherinej/.local/lib/python3.6/site-packages/ipykernel_launcher.py:4: FutureWarning: 'pandas.tools.plotting.scatter_matrix' is deprecated, import 'pandas.plotting.scatter_matrix' instead.\n after removing the cwd from sys.path.\n"
],
[
"housing.plot(kind='scatter', x='median_income', y='median_house_value', alpha=0.1)",
"_____no_output_____"
],
[
"housing['rooms_per_household'] = housing['total_rooms']/housing['households']\nhousing['bedrooms_per_room'] = housing['total_bedrooms']/housing['total_rooms']\nhousing['population_per_houshold'] = housing['population']/housing['households']",
"_____no_output_____"
],
[
"corr_matrix = housing.corr()\ncorr_matrix['median_house_value'].sort_values(ascending=False)",
"_____no_output_____"
],
[
"housing = strat_train_set.drop('median_house_value', axis=1)\nhousing_labels = strat_train_set['median_house_value'].copy()\nhousing_labels",
"_____no_output_____"
],
[
"housing.dropna(subset=['total_bedrooms'])\nfrom sklearn.preprocessing import Imputer\nimputer = Imputer(strategy='median')\nhousing_num = housing.drop('ocean_proximity', axis=1)\nimputer.fit(housing_num)\nimputer.statistics_",
"_____no_output_____"
],
[
"housing_tr = pd.DataFrame(imputer.transform(housing_num), columns=housing_num.columns)",
"_____no_output_____"
],
[
"from sklearn.preprocessing import LabelEncoder\nencoder = LabelEncoder()\nhousing_cat = housing['ocean_proximity']\nhousing_cat_encoded = encoder.fit_transform(housing_cat)\nhousing_cat_encoded",
"_____no_output_____"
],
[
"print(encoder.classes_)\nfrom sklearn.preprocessing import OneHotEncoder\nencoder = OneHotEncoder()\nhousing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1, 1))\nhousing_cat_1hot",
"['<1H OCEAN' 'INLAND' 'ISLAND' 'NEAR BAY' 'NEAR OCEAN']\n"
]
],
[
[
"from sklearn.preprocessing import LabelBinarizer\nencoder = LabelBinarizer()\nhousing_cat_1hot = encoder.fit_transform(housing_cat)\nhousing_cat_1hot",
"_____no_output_____"
]
],
[
[
"from sklearn.base import BaseEstimator, TransformerMixin\nrooms_ix, bedrooms_ix, population_ix, household_ix = 3, 4, 5, 6\nclass CombineAttributesAdder(BaseEstimator, TransformerMixin):\n def __init__(self, add_bedrooms_per_room = True):\n self.add_bedrooms_per_room = add_bedrooms_per_room\n def fit(self, X, y=None):\n return self\n def transform(self, X, y=None):\n rooms_per_household = X[:, rooms_ix] / X[:, household_ix]\n population_per_household = X[:, population_ix] / X[:,household_ix]\n if self.add_bedrooms_per_room:\n bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]\n return np.c_[X, rooms_per_household, population_per_household,\n bedrooms_per_room]\n else:\n return np.c_[X, rooms_per_household, population_per_household]\nattr_adder = CombineAttributesAdder(add_bedrooms_per_room=False)\nhousing_extra_attribs = attr_adder.transform(housing.values)\n\n ",
"_____no_output_____"
],
[
"from sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import StandardScaler\nnum_pipeline = Pipeline([\n ('imputer', Imputer(strategy='median')),\n ('attribs_adder', CombineAttributesAdder()),\n ('std_scaler', StandardScaler()),\n])\nhousing_num_tr = num_pipeline.fit_transform(housing_num)",
"_____no_output_____"
],
[
"from sklearn.pipeline import FeatureUnion\nfrom sklearn.preprocessing import LabelBinarizer\nclass DataFrameSelector(BaseEstimator, TransformerMixin):\n def __init__(self, attribute_names):\n self.attribute_names = attribute_names\n def fit(self, X, y=None):\n return self\n def transform(self, X):\n return X[self.attribute_names].values\n \nnum_attribs = list(housing_num)\ncat_attribs = ['ocean_proximity']\nnum_pipeline = Pipeline([\n ('selector', DataFrameSelector(num_attribs)),\n ('imputer', Imputer(strategy='median')),\n ('attribs_adder', CombineAttributesAdder()),\n ('std_scaler', StandardScaler())\n])\ncat_pipeline = Pipeline([\n ('selector', DataFrameSelector(cat_attribs)),\n ('label_binarizer', LabelBinarizer()),\n])\nfull_pipeline = FeatureUnion(transformer_list=[\n ('num_pipeline', num_pipeline),\n ('cat_pipeline', cat_pipeline),\n])\nhousing_prepared = full_pipeline.fit_transform(housing)\nhousing_prepared",
"_____no_output_____"
],
[
"from sklearn.linear_model import LinearRegression\nlin_reg = LinearRegression()\nlin_reg.fit(housing_prepared, housing_labels)\nsome_data = housing.iloc[:5]\nsome_labels = housing_labels.iloc[:5]\nsome_data_prepared = full_pipeline.transform(some_data)\nprint('Predicts:\\t', lin_reg.predict(some_data_prepared))\nprint('Labels: \\t', list(some_labels))",
"Predicts:\t [210644.60459286 317768.80697211 210956.43331178 59218.98886849\n 189747.55849879]\nLabels: \t [286600.0, 340600.0, 196900.0, 46300.0, 254500.0]\n"
],
[
"from sklearn.metrics import mean_squared_error\nhousing_predicts = lin_reg.predict(housing_prepared)\nlin_mse = mean_squared_error(housing_labels, housing_predicts)\nlin_rmse = np.sqrt(lin_mse)\nlin_rmse",
"_____no_output_____"
],
[
"from sklearn.tree import DecisionTreeRegressor\ntree_reg = DecisionTreeRegressor()\ntree_reg.fit(housing_prepared, housing_labels)",
"_____no_output_____"
],
[
"housing_predictions = tree_reg.predict(housing_prepared)\ntree_mse = mean_squared_error(housing_labels, housing_predictions)\ntree_rmse = np.sqrt(tree_mse)\ntree_rmse",
"_____no_output_____"
],
[
"from sklearn.model_selection import cross_val_score\nscores = cross_val_score(tree_reg, housing_prepared, housing_labels,\n scoring='neg_mean_squared_error', cv=10)\nrmse_scores = np.sqrt(-scores)\ndef display_scores(scores):\n print('Scores:', scores)\n print(\"Mean\", scores.mean())\n print('Standard Deviation:', scores.std())\ndisplay_scores(rmse_scores)",
"Scores: [68485.64883892 68177.24019019 70352.68707273 68687.28083179\n 69768.71970464 74656.3319954 71274.33003731 70810.50444645\n 76836.78176908 69063.31286269]\nMean 70811.28377491952\nStandard Deviation: 2692.88394466027\n"
],
[
"lin_scores = cross_val_score(lin_reg, housing_prepared, housing_labels,\n scoring='neg_mean_squared_error', cv=10)\nlin_rmse_scores = np.sqrt(-lin_scores)\ndisplay_scores(lin_rmse_scores)",
"Scores: [66761.51194567 66962.64795527 70349.92996432 74756.38194304\n 68031.13388938 71193.84183426 64967.93464896 68262.2267385\n 71529.15775812 67665.10082067]\nMean 69047.98674981864\nStandard Deviation: 2735.488028188857\n"
],
[
"from sklearn.ensemble import RandomForestRegressor\nforest_reg = RandomForestRegressor()\nforest_reg.fit(housing_prepared, housing_labels)\nforest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels,\n scoring='neg_mean_squared_error', cv=10)\nforest_rmse_scores = np.sqrt(-forest_scores)\nforest_rmse_scores\ndisplay_scores(forest_rmse_scores)",
"Scores: [51928.86090718 49869.45225738 53082.11480927 54966.8373482\n 51754.13588618 55397.60103713 51843.02721478 49886.96142627\n 55957.30923817 53840.56665421]\nMean 52852.68667787641\nStandard Deviation: 2058.842915654273\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb59fb5e22d4d3c277378cd046c2ef2e64b5b4d2 | 1,862 | ipynb | Jupyter Notebook | 2021 Осенний семестр/Практическое задание 4_5/Маслихин_Задание 4_5_3.ipynb | mosalov/Notebook_For_AI_Main | a693d29bf0bdcf824cb4f1eca86ff54b67ba7428 | [
"MIT"
] | 6 | 2021-09-20T10:28:18.000Z | 2022-03-14T18:39:17.000Z | 2021 Осенний семестр/Практическое задание 4_5/Маслихин_Задание 4_5_3.ipynb | mosalov/Notebook_For_AI_Main | a693d29bf0bdcf824cb4f1eca86ff54b67ba7428 | [
"MIT"
] | 122 | 2020-09-07T11:57:57.000Z | 2022-03-22T06:47:03.000Z | 2021 Осенний семестр/Практическое задание 4_5/Маслихин_Задание 4_5_3.ipynb | mosalov/Notebook_For_AI_Main | a693d29bf0bdcf824cb4f1eca86ff54b67ba7428 | [
"MIT"
] | 97 | 2020-09-07T11:32:19.000Z | 2022-03-31T10:27:38.000Z | 1,862 | 1,862 | 0.690118 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot\ndf = pd.read_csv('winequality-white.csv', ';')\ndf1 = df[df['residual sugar'] < 1][['density', 'pH']]\ndf2 = df1[df1.index % 2 == 0]\nprint(df2)\n",
" density pH\n54 0.99300 3.05\n172 0.99020 3.03\n372 0.99340 3.34\n1164 0.99280 3.07\n1166 0.99270 3.23\n1322 0.99200 3.54\n1340 0.99220 3.27\n1366 0.99140 3.25\n1486 0.99320 3.30\n2418 0.99274 3.16\n2468 0.99155 3.25\n2512 0.99112 3.28\n2754 0.98985 3.13\n2888 0.99180 3.04\n2898 0.99104 2.99\n2904 0.99114 3.31\n2934 0.99215 3.30\n2936 0.99218 3.30\n2956 0.99026 2.80\n2968 0.99038 2.93\n2970 0.99038 2.93\n2996 0.98942 3.11\n3136 0.99026 3.50\n3170 0.99126 3.02\n3212 0.99033 3.25\n3526 0.98984 3.01\n3554 0.99062 2.92\n3626 0.99048 3.21\n3924 0.99249 2.95\n4106 0.99119 3.16\n4682 0.98990 3.11\n4804 0.98934 3.24\n4878 0.99234 3.24\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
cb5a12fa82d8e3ac908499731ea9a3476ac4b803 | 10,816 | ipynb | Jupyter Notebook | notebooks/3_lfp.ipynb | magland/spyglass | 7878427a10101ababd655ccf2d89bfc58173fcf2 | [
"MIT"
] | 14 | 2020-02-04T20:05:02.000Z | 2022-03-13T18:13:20.000Z | notebooks/3_lfp.ipynb | magland/spyglass | 7878427a10101ababd655ccf2d89bfc58173fcf2 | [
"MIT"
] | 118 | 2020-06-15T16:40:48.000Z | 2022-03-21T17:25:47.000Z | notebooks/3_lfp.ipynb | magland/spyglass | 7878427a10101ababd655ccf2d89bfc58173fcf2 | [
"MIT"
] | 16 | 2020-02-04T19:04:07.000Z | 2022-03-18T21:15:32.000Z | 30.382022 | 217 | 0.574612 | [
[
[
"# This notebook shows an example where a set of electrodes are selected from a dataset and then LFP is extracted from those electrodes and then written to a new NWB file\n",
"_____no_output_____"
]
],
[
[
"import pynwb\nimport os\n\n#DataJoint and DataJoint schema\nimport datajoint as dj\n\n## We also import a bunch of tables so that we can call them easily\nfrom nwb_datajoint.common import (RawPosition, HeadDir, Speed, LinPos, StateScriptFile, VideoFile,\n DataAcquisitionDevice, CameraDevice, Probe,\n DIOEvents,\n ElectrodeGroup, Electrode, Raw, SampleCount,\n LFPSelection, LFP, LFPBandSelection, LFPBand,\n SortGroup, SpikeSorting, SpikeSorter, SpikeSorterParameters, SpikeSortingWaveformParameters, SpikeSortingParameters, SpikeSortingMetrics, CuratedSpikeSorting,\\\n FirFilter,\n IntervalList, SortInterval,\n Lab, LabMember, LabTeam, Institution,\n BrainRegion,\n SensorData,\n Session, ExperimenterList,\n Subject,\n Task, TaskEpoch,\n Nwbfile, AnalysisNwbfile, NwbfileKachery, AnalysisNwbfileKachery,\n interval_list_contains,\n interval_list_contains_ind,\n interval_list_excludes,\n interval_list_excludes_ind,\n interval_list_intersect,\n get_electrode_indices)\n\nimport warnings\nwarnings.simplefilter('ignore', category=DeprecationWarning)\nwarnings.simplefilter('ignore', category=ResourceWarning)",
"_____no_output_____"
]
],
[
[
"#### Next we select the NWB file, which corresponds to the dataset we want to extract LFP from",
"_____no_output_____"
]
],
[
[
"nwb_file_names = Nwbfile().fetch('nwb_file_name')\n# take the first one for this demonstration\nnwb_file_name = nwb_file_names[0]\nprint(nwb_file_name)\n",
"_____no_output_____"
]
],
[
[
"#### Create the standard LFP Filters. This only needs to be done once.",
"_____no_output_____"
]
],
[
[
"FirFilter().create_standard_filters()",
"_____no_output_____"
]
],
[
[
"#### Now we Select every 16th electrode for LFP or, below, a specific set of electrodes. Choose one\nNote that this will delete the current selection, and all downstream LFP and LFPBand information (if it exists), but only for the current dataset. This is fine to do if you want to generate or regenerate the LFP",
"_____no_output_____"
]
],
[
[
"electrode_ids = (Electrode & {'nwb_file_name' : nwb_file_name}).fetch('electrode_id')\nlfp_electrode_ids = electrode_ids[range(0, len(electrode_ids), 128)]\nLFPSelection().set_lfp_electrodes(nwb_file_name, lfp_electrode_ids.tolist())\n",
"_____no_output_____"
],
[
"LFPSelection().LFPElectrode() & {'nwb_file_name' : nwb_file_name}",
"_____no_output_____"
]
],
[
[
"### Or select one electrode for LFP\n",
"_____no_output_____"
]
],
[
[
"LFPSelection().set_lfp_electrodes(nwb_file_name, [0, 1])",
"_____no_output_____"
],
[
"LFPSelection().LFPElectrode() & {'nwb_file_name':nwb_file_name}",
"_____no_output_____"
]
],
[
[
"### Populate the LFP table. Note that this takes 2 hours or so on a laptop if you use all electrodes",
"_____no_output_____"
]
],
[
[
"LFP().populate([LFPSelection & {'nwb_file_name':nwb_file_name}])",
"_____no_output_____"
]
],
[
[
"### Now that we've created the LFP object we can perform a second level of filtering for a band of interest, in this case the theta band\nWe first need to create the filter",
"_____no_output_____"
]
],
[
[
"lfp_sampling_rate = (LFP() & {'nwb_file_name' : nwb_file_name}).fetch1('lfp_sampling_rate')\nfilter_name = 'Theta 5-11 Hz'\nFirFilter().add_filter(filter_name, lfp_sampling_rate, 'bandpass', [4, 5, 11, 12], 'theta filter for 1 Khz data')",
"_____no_output_____"
],
[
"FirFilter()",
"_____no_output_____"
]
],
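The stored entry describes an FIR band-pass with a 5–11 Hz pass band and 4–5 Hz / 11–12 Hz transition bands. The actual design is handled inside the `FirFilter` table; purely as an illustration of what such a filter does (a SciPy-based sketch, not the `nwb_datajoint` implementation, and assuming SciPy is available):

```python
import numpy as np
from scipy.signal import firwin, freqz

# Illustrative band-pass roughly matching 'Theta 5-11 Hz'; the tap count is arbitrary.
fs = lfp_sampling_rate  # LFP sampling rate fetched above
taps = firwin(1001, [5, 11], pass_zero=False, fs=fs)

w, h = freqz(taps, worN=8000, fs=fs)
print("gain near 8 Hz:", abs(h[np.argmin(np.abs(w - 8))]))  # should be close to 1
print("gain near 2 Hz:", abs(h[np.argmin(np.abs(w - 2))]))  # should be close to 0
```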
[
[
"Next we add an entry for the LFP Band and the electrodes we want to filter",
"_____no_output_____"
]
],
[
[
"# assume that we've filtered these electrodes; change this if not\nlfp_band_electrode_ids = [1]\n\n# set the interval list name corresponding to the second epoch (a run session)\ninterval_list_name = '02_r1'\n\n# set the reference to -1 to indicate no reference for all channels\nref_elect = [-1]\n\n# desired sampling rate\nlfp_band_sampling_rate = 100",
"_____no_output_____"
],
[
"LFPBandSelection().set_lfp_band_electrodes(nwb_file_name, lfp_band_electrode_ids, filter_name, interval_list_name, ref_elect, lfp_band_sampling_rate)",
"_____no_output_____"
]
],
[
[
"Check to make sure it worked",
"_____no_output_____"
]
],
[
[
"(LFPBandSelection() & {'nwb_file_name' : nwb_file_name})",
"_____no_output_____"
],
[
"LFPBand().populate(LFPBandSelection() & {'nwb_file_name' : nwb_file_name})\nLFPBand()",
"_____no_output_____"
]
],
[
[
"### Now we can plot the original signal, the LFP filtered trace, and the theta filtered trace together.\nMuch of the code below could be replaced by a function calls that would return the data from each electrical series",
"_____no_output_____"
]
],
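A possible shape for such a helper (a sketch only, not part of `nwb_datajoint`), using the `get_electrode_indices` utility that is already imported above:

```python
import numpy as np

def get_series_window(eseries, electrode_id, t_start, t_stop):
    """Return (timestamps, data) for one electrode of an electrical series,
    restricted to the open interval (t_start, t_stop)."""
    chan = get_electrode_indices(eseries, [electrode_id])[0]
    ind = np.argwhere(np.logical_and(eseries.timestamps > t_start,
                                     eseries.timestamps < t_stop))
    return eseries.timestamps[ind], eseries.data[ind, chan]
```

Each of the three traces plotted in the cells below could then be fetched with a single call.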
[
[
"import matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"#get the three electrical series objects and the indeces of the electrodes we band pass filtered\norig_eseries = (Raw() & {'nwb_file_name' : nwb_file_name}).fetch_nwb()[0]['raw']\norig_elect_indeces = get_electrode_indices(orig_eseries, lfp_band_electrode_ids)\n\nlfp_eseries = (LFP() & {'nwb_file_name' : nwb_file_name}).fetch_nwb()[0]['lfp']\nlfp_elect_indeces = get_electrode_indices(lfp_eseries, lfp_band_electrode_ids)\n\nlfp_band_eseries = (LFPBand() & {'nwb_file_name' : nwb_file_name}).fetch_nwb()[0]['filtered_data']\nlfp_band_elect_indeces = get_electrode_indices(lfp_band_eseries, lfp_band_electrode_ids)\n",
"_____no_output_____"
],
[
"# get a list of times for the first run epoch and then select a 2 second interval 100 seconds from the beginning\nrun1times = (IntervalList & {'nwb_file_name': nwb_file_name, 'interval_list_name' : '02_r1'}).fetch1('valid_times')\nplottimes = [run1times[0][0] + 101, run1times[0][0] + 102]",
"_____no_output_____"
],
[
"# get the time indeces for each dataset\norig_time_ind = np.argwhere(np.logical_and(orig_eseries.timestamps > plottimes[0], orig_eseries.timestamps < plottimes[1]))\n\nlfp_time_ind = np.argwhere(np.logical_and(lfp_eseries.timestamps > plottimes[0], lfp_eseries.timestamps < plottimes[1]))\nlfp_band_time_ind = np.argwhere(np.logical_and(lfp_band_eseries.timestamps > plottimes[0], lfp_band_eseries.timestamps < plottimes[1]))",
"_____no_output_____"
],
[
"plt.plot(orig_eseries.timestamps[orig_time_ind], orig_eseries.data[orig_time_ind,orig_elect_indeces[0]], 'k-')\nplt.plot(lfp_eseries.timestamps[lfp_time_ind], lfp_eseries.data[lfp_time_ind,lfp_elect_indeces[0]], 'b-')\nplt.plot(lfp_band_eseries.timestamps[lfp_band_time_ind], lfp_band_eseries.data[lfp_band_time_ind,lfp_band_elect_indeces[0]], 'r-')\nplt.xlabel('Time (sec)')\nplt.ylabel('Amplitude (AD units)')\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
cb5a16417a988a3ae85cde7e31777f9e129622f3 | 204,423 | ipynb | Jupyter Notebook | chapter05_hpc/05_cython.ipynb | tkondoh1022/cookbook-2nd-code | 52d8c6c0a199a9009dfaccbac3cc2e8dfd21a859 | [
"MIT"
] | 645 | 2018-02-01T09:16:45.000Z | 2022-03-03T17:47:59.000Z | chapter05_hpc/05_cython.ipynb | wangbin0619/cookbook-2nd-code | acd2ea2e55838f9bb3fc92a23aa991b3320adcaf | [
"MIT"
] | 3 | 2019-03-11T09:47:21.000Z | 2022-01-11T06:32:00.000Z | chapter05_hpc/05_cython.ipynb | wangbin0619/cookbook-2nd-code | acd2ea2e55838f9bb3fc92a23aa991b3320adcaf | [
"MIT"
] | 418 | 2018-02-13T03:17:05.000Z | 2022-03-18T21:04:45.000Z | 1,216.803571 | 107,750 | 0.94143 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
cb5a1ec6e359f47c89e52898c232f52a46dfb6f5 | 35,219 | ipynb | Jupyter Notebook | training/notebooks/templates/object-detection-training.ipynb | Pandinosaurus/tfjs-object-detection-training | b05fb453101be477c070935648beb8a66bc10dd5 | [
"MIT"
] | null | null | null | training/notebooks/templates/object-detection-training.ipynb | Pandinosaurus/tfjs-object-detection-training | b05fb453101be477c070935648beb8a66bc10dd5 | [
"MIT"
] | null | null | null | training/notebooks/templates/object-detection-training.ipynb | Pandinosaurus/tfjs-object-detection-training | b05fb453101be477c070935648beb8a66bc10dd5 | [
"MIT"
] | null | null | null | 36.159138 | 408 | 0.582328 | [
[
[
"# Train a model using Watson Studio and deploy it in Watson Machine Learning\n\nThis notebook will show how to use your annotated images from Cloud Annotations to train an Object Detection model using a Python Notebook in Watson Studio. After training and testing, some extra steps will show how to deploy this model in Watson Machine Learning as an online API. You can use this API from any application afterwards.\n\nAs a suggestion you can use this dataset from Kaggle to test Cloud Annotation and this notebook: https://www.kaggle.com/issaisasank/guns-object-detection",
"_____no_output_____"
],
[
"### Specify the credentials for the bucket you used in Cloud Annoations",
"_____no_output_____"
]
],
[
[
"# credentials = {\n# 'BUCKET': '$$$BUCKET$$$',\n# 'IBM_API_KEY_ID': '$$$IBM_API_KEY_ID$$$',\n# 'IAM_SERVICE_ID': '$$$IAM_SERVICE_ID$$$',\n# 'ENDPOINT': '$$$ENDPOINT$$$',\n# }\ncredentials = {\n 'IAM_SERVICE_ID': 'iam-ServiceId-f0afd6e2-22d6-433a-91ce-4d02fab0a8e0',\n 'IBM_API_KEY_ID': 'Q5ZIqOmUOt9PB2lOZX4n1RzHUO-E_kYQ3RFhbSsEtjfm',\n 'ENDPOINT': 'https://s3.us.cloud-object-storage.appdomain.cloud',\n 'IBM_AUTH_ENDPOINT': 'https://iam.cloud.ibm.com/oidc/token',\n 'BUCKET': 'guns-object-detection'\n}",
"_____no_output_____"
]
],
[
[
"# Setup",
"_____no_output_____"
]
],
[
[
"import os\nimport shutil\n\nif os.path.exists('tmp') and os.path.isdir('tmp'):\n shutil.rmtree('tmp')\n\nCLOUD_ANNOTATIONS_DATA = os.path.join('tmp', credentials['BUCKET'])\n\nos.makedirs(CLOUD_ANNOTATIONS_DATA, exist_ok=True)",
"_____no_output_____"
],
[
"import json\nimport ibm_boto3\nfrom ibm_botocore.client import Config, ClientError\n\ndef download_file_cos(local_file_name, key): \n '''\n Wrapper function to download a file from cloud object storage using the\n credential dict provided and loading it into memory\n '''\n cos = ibm_boto3.client(\"s3\",\n ibm_api_key_id=credentials['IBM_API_KEY_ID'],\n ibm_service_instance_id=credentials['IBM_API_KEY_ID'],\n config=Config(signature_version=\"oauth\"),\n endpoint_url=credentials['ENDPOINT']\n )\n try:\n res=cos.download_file(Bucket=credentials['BUCKET'], Key=key, Filename=local_file_name)\n except Exception as e:\n print('Exception', e)\n else:\n print('File Downloaded')\n \ndef get_annotations(): \n cos = ibm_boto3.client(\"s3\",\n ibm_api_key_id=credentials['IBM_API_KEY_ID'],\n ibm_service_instance_id=credentials['IBM_API_KEY_ID'],\n config=Config(signature_version=\"oauth\"),\n endpoint_url=credentials['ENDPOINT']\n )\n try:\n return json.loads(cos.get_object(Bucket=credentials['BUCKET'], Key='_annotations.json')['Body'].read())\n except Exception as e:\n print('Exception', e)",
"_____no_output_____"
],
[
"annotations = get_annotations()\n\ndownload_file_cos(os.path.join(CLOUD_ANNOTATIONS_DATA, '_annotations.json'), '_annotations.json')\n\nfor image in annotations['annotations'].keys():\n local_path = os.path.join(CLOUD_ANNOTATIONS_DATA, image)\n download_file_cos(local_path, image)",
"_____no_output_____"
],
[
"NUM_TRAIN_STEPS = 500\nMODEL_TYPE = 'ssd_mobilenet_v1_quantized_300x300_coco14_sync_2018_07_18'\nCONFIG_TYPE = 'ssd_mobilenet_v1_quantized_300x300_coco14_sync'\n\nimport os\nCLOUD_ANNOTATIONS_MOUNT = os.path.join('tmp', credentials['BUCKET'])\nANNOTATIONS_JSON_PATH = os.path.join(CLOUD_ANNOTATIONS_MOUNT, '_annotations.json')\n\nCHECKPOINT_PATH = 'tmp/checkpoint'\nOUTPUT_PATH = 'tmp/output'\nEXPORTED_PATH = 'tmp/exported'\nDATA_PATH = 'tmp/data'\n\nLABEL_MAP_PATH = os.path.join(DATA_PATH, 'label_map.pbtxt')\nTRAIN_RECORD_PATH = os.path.join(DATA_PATH, 'train.record')\nVAL_RECORD_PATH = os.path.join(DATA_PATH, 'val.record')",
"_____no_output_____"
]
],
[
[
"## Installing dependencies\n\nIn the next cell we will install the libraries that will be used. Since we are using an older version of Tensorflow and Numpy, compared to the version that is already installed by default in your environment. We highly suggest creating a custom environment in your Watson Studio project for this notebook, using the following configuration:\n\n``````\n# Modify the following content to add a software customization to an environment.\n# To remove an existing customization, delete the entire content and click Apply.\n# The customizations must follow the format of a conda environment yml file.\n\n# Add conda channels below defaults, indented by two spaces and a hyphen.\nchannels:\n - defaults\n\n# To add packages through conda or pip, remove the # on the following line.\ndependencies:\n\n# Add conda packages here, indented by two spaces and a hyphen.\n# Remove the # on the following line and replace sample package name with your package name:\n\n# Add pip packages here, indented by four spaces and a hyphen.\n# Remove the # on the following lines and replace sample package name with your package name.\n - pip:\n - numpy==1.19.5\n - tensorflow==1.15.2\n``````\n\nUse Python 3.7 and any hardware configuration without CPU that you would like. This notebook was not prepared to support training using GPUs in Watson Studio. Use the next cell to install the other dependencies as normal. After creating the environment you will have to change it using the **Information** tab, on the right side menu.",
"_____no_output_____"
]
],
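Once the custom environment is active, a quick check confirms that the pinned versions were actually picked up before installing anything else:

```python
import numpy as np
import tensorflow as tf

print("numpy     :", np.__version__)   # expected: 1.19.5
print("tensorflow:", tf.__version__)   # expected: 1.15.2
```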
[
[
"import os\nimport sys\nimport pathlib\n\n# Clone the tensorflow models repository if it doesn't already exist\nif \"models\" in pathlib.Path.cwd().parts:\n while \"models\" in pathlib.Path.cwd().parts:\n os.chdir('..')\nelif not pathlib.Path('models').exists():\n !git clone --depth 1 https://github.com/cloud-annotations/models\n\n# !pip uninstall Cython -y\n# !pip uninstall tf_slim -y\n# !pip uninstall opencv-python-headless -y\n# !pip uninstall lvis -y\n# !pip uninstall pycocotools -y\n# !pip uninstall numpy -y\n# !pip uninstall tensorflow -y \n\n# !pip install numpy==1.19.5\n# !pip install tensorflow==1.15.2\n!pip install Cython\n!pip install tf_slim\n!pip install opencv-python-headless\n!pip install lvis --no-deps\n!pip install pycocotools\n\n%cd models/research\n!protoc object_detection/protos/*.proto --python_out=.\n\npwd = os.getcwd()\n# we need to set both PYTHONPATH for shell scripts and sys.path for python cells\nsys.path.append(pwd)\nsys.path.append(os.path.join(pwd, 'slim'))\nif 'PYTHONPATH' in os.environ:\n os.environ['PYTHONPATH'] += f':{pwd}:{pwd}/slim'\nelse:\n os.environ['PYTHONPATH'] = f':{pwd}:{pwd}/slim'\n%cd ../..",
"_____no_output_____"
]
],
[
[
"## Testing Tensorflow",
"_____no_output_____"
]
],
[
[
"%cd models/research\n!python object_detection/builders/model_builder_tf1_test.py\n%cd ../..",
"_____no_output_____"
]
],
[
[
"# Generate a Label Map\n\nOne piece of data the Object Detection API needs is a label map protobuf. The label map associates an integer id to the text representation of the label. The ids are indexed by 1, meaning the first label will have an id of 1 not 0.\n\nHere is an example of what a label map looks like:\n\n````\nitem {\n id: 1\n name: 'Cat'\n}\n\nitem {\n id: 2\n name: 'Dog'\n}\n\nitem {\n id: 3\n name: 'Gold Fish'\n}\n````\n",
"_____no_output_____"
]
],
[
[
"import os\nimport json\n\n# Get a list of labels from the annotations.json\nlabels = {}\nwith open(ANNOTATIONS_JSON_PATH) as f:\n annotations = json.load(f)\n labels = annotations['labels']\n\n# Create a file named label_map.pbtxt\nos.makedirs(DATA_PATH, exist_ok=True)\nwith open(LABEL_MAP_PATH, 'w') as f:\n # Loop through all of the labels and write each label to the file with an id\n for idx, label in enumerate(labels):\n f.write('item {\\n')\n f.write(\"\\tname: '{}'\\n\".format(label))\n f.write('\\tid: {}\\n'.format(idx + 1)) # indexes must start at 1\n f.write('}\\n')",
"_____no_output_____"
]
],
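To confirm the file was written correctly, it can be parsed back with the same utility the training code relies on later (a small sanity check):

```python
from object_detection.utils import label_map_util

# Read the generated label map back into a {label name: id} dictionary.
label_map_dict = label_map_util.get_label_map_dict(LABEL_MAP_PATH)
print(label_map_dict)
```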
[
[
"# Generate TFRecords\n\nThe TensorFlow Object Detection API expects our data to be in the format of TFRecords.\n\nThe TFRecord format is a collection of serialized feature dicts, one for each image, looking something like this:\n\n````\n{\n 'image/height': 1800,\n 'image/width': 2400,\n 'image/filename': 'image1.jpg',\n 'image/source_id': 'image1.jpg',\n 'image/encoded': ACTUAL_ENCODED_IMAGE_DATA_AS_BYTES,\n 'image/format': 'jpeg',\n 'image/object/bbox/xmin': [0.7255949630314233, 0.8845598428835489],\n 'image/object/bbox/xmax': [0.9695875693160814, 1.0000000000000000],\n 'image/object/bbox/ymin': [0.5820120073891626, 0.1829972290640394],\n 'image/object/bbox/ymax': [1.0000000000000000, 0.9662484605911330],\n 'image/object/class/text': (['Cat', 'Dog']),\n 'image/object/class/label': ([1, 2])\n}\n````\n",
"_____no_output_____"
]
],
[
[
"import os\nimport io\nimport json\nimport random\n\nimport PIL.Image\nimport tensorflow as tf\n\nfrom object_detection.utils import dataset_util\nfrom object_detection.utils import label_map_util\n\ndef create_tf_record(images, annotations, label_map, image_path, output):\n # Create a train.record TFRecord file.\n with tf.python_io.TFRecordWriter(output) as writer:\n # Loop through all the training examples.\n for image_name in images:\n try:\n # Make sure the image is actually a file\n img_path = os.path.join(image_path, image_name) \n if not os.path.isfile(img_path):\n continue\n\n # Read in the image.\n with tf.gfile.GFile(img_path, 'rb') as fid:\n encoded_jpg = fid.read()\n\n # Open the image with PIL so we can check that it's a jpeg and get the image\n # dimensions.\n encoded_jpg_io = io.BytesIO(encoded_jpg)\n image = PIL.Image.open(encoded_jpg_io)\n if image.format != 'JPEG':\n raise ValueError('Image format not JPEG')\n\n width, height = image.size\n\n # Initialize all the arrays.\n xmins = []\n xmaxs = []\n ymins = []\n ymaxs = []\n classes_text = []\n classes = []\n\n # The class text is the label name and the class is the id. If there are 3\n # cats in the image and 1 dog, it may look something like this:\n # classes_text = ['Cat', 'Cat', 'Dog', 'Cat']\n # classes = [ 1 , 1 , 2 , 1 ]\n\n # For each image, loop through all the annotations and append their values.\n for a in annotations[image_name]:\n if (\"x\" in a and \"x2\" in a and \"y\" in a and \"y2\" in a):\n label = a['label']\n xmins.append(a[\"x\"])\n xmaxs.append(a[\"x2\"])\n ymins.append(a[\"y\"])\n ymaxs.append(a[\"y2\"])\n classes_text.append(label.encode(\"utf8\"))\n classes.append(label_map[label])\n\n # Create the TFExample.\n tf_example = tf.train.Example(features=tf.train.Features(feature={\n 'image/height': dataset_util.int64_feature(height),\n 'image/width': dataset_util.int64_feature(width),\n 'image/filename': dataset_util.bytes_feature(image_name.encode('utf8')),\n 'image/source_id': dataset_util.bytes_feature(image_name.encode('utf8')),\n 'image/encoded': dataset_util.bytes_feature(encoded_jpg),\n 'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')),\n 'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),\n 'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),\n 'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),\n 'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),\n 'image/object/class/text': dataset_util.bytes_list_feature(classes_text),\n 'image/object/class/label': dataset_util.int64_list_feature(classes),\n }))\n if tf_example:\n # Write the TFExample to the TFRecord.\n writer.write(tf_example.SerializeToString())\n except ValueError:\n print('Invalid example, ignoring.')\n pass\n except IOError:\n print(\"Can't read example, ignoring.\")\n pass\n\nwith open(ANNOTATIONS_JSON_PATH) as f:\n annotations = json.load(f)['annotations']\n image_files = [image for image in annotations.keys()]\n # Load the label map we created.\n label_map = label_map_util.get_label_map_dict(LABEL_MAP_PATH)\n\n random.seed(42)\n random.shuffle(image_files)\n num_train = int(0.7 * len(image_files))\n train_examples = image_files[:num_train]\n val_examples = image_files[num_train:]\n\n create_tf_record(train_examples, annotations, label_map, CLOUD_ANNOTATIONS_MOUNT, TRAIN_RECORD_PATH)\n create_tf_record(val_examples, annotations, label_map, CLOUD_ANNOTATIONS_MOUNT, VAL_RECORD_PATH)",
"_____no_output_____"
]
],
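Before training, it can help to decode one serialized example and confirm the records are not empty; a small sketch using the TF 1.x record iterator:

```python
import tensorflow as tf

# Decode the first example from train.record and inspect it.
first_record = next(tf.python_io.tf_record_iterator(TRAIN_RECORD_PATH))
example = tf.train.Example.FromString(first_record)
print(sorted(example.features.feature.keys()))
num_boxes = len(example.features.feature['image/object/bbox/xmin'].float_list.value)
print('bounding boxes in first example:', num_boxes)
```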
[
[
"# Download a base model\n\nTraining a model from scratch can take days and tons of data. We can mitigate this by using a pretrained model checkpoint. Instead of starting from nothing, we can add to what was already learned with our own data.\n\nThere are several pretrained model checkpoints that can be downloaded from the model zoo.\n\nThe model we will be training is the SSD MobileNet architecture. SSD MobileNet models have a very small file size and can execute very quickly, compromising little accuracy, which makes it perfect for running in the browser. Additionally, we will be using quantization. When we say the model is quantized it means instead of using float32 as the datatype of our numbers we are using float16 or int8.\n\n````\nfloat32(PI) = 3.1415927 32 bits\nfloat16(PI) = 3.14 16 bits\nint8(PI) = 3 8 bits\n````\n\nWe do this because it can cut our model size down by around a factor of 4! An unquantized version of SSD MobileNet that I trained was 22.3 MB, but the quantized version was 5.7 MB that's a ~75% reduction 🎉",
"_____no_output_____"
]
],
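The factor-of-4 figure follows directly from the byte widths of the datatypes; a tiny worked example with one million weights:

```python
import numpy as np

weights_f32 = np.zeros(1_000_000, dtype=np.float32)
weights_f16 = np.zeros(1_000_000, dtype=np.float16)
weights_i8 = np.zeros(1_000_000, dtype=np.int8)

print(weights_f32.nbytes / 1e6, 'MB')  # 4.0 MB
print(weights_f16.nbytes / 1e6, 'MB')  # 2.0 MB
print(weights_i8.nbytes / 1e6, 'MB')   # 1.0 MB -> 4x smaller than float32
```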
[
[
"import os\nimport tarfile\n\nimport six.moves.urllib as urllib\n\ndownload_base = 'http://download.tensorflow.org/models/object_detection/'\nmodel = MODEL_TYPE + '.tar.gz'\ntmp = 'tmp/checkpoint.tar.gz'\n\nif not (os.path.exists(CHECKPOINT_PATH)):\n # Download the checkpoint\n opener = urllib.request.URLopener()\n opener.retrieve(download_base + model, tmp)\n\n # Extract all the `model.ckpt` files.\n with tarfile.open(tmp) as tar:\n for member in tar.getmembers():\n member.name = os.path.basename(member.name)\n if 'model.ckpt' in member.name:\n tar.extract(member, path=CHECKPOINT_PATH)\n\n os.remove(tmp)",
"_____no_output_____"
]
],
[
[
"# Model Config\n\nThe final thing we need to do is inject our pipline with the amount of labels we have and where to find the label map, TFRecord and model checkpoint. We also need to change the the batch size, because the default batch size of 128 is too large for Colab to handle.",
"_____no_output_____"
]
],
[
[
"\n#from google.protobuf import text_format\n\nfrom object_detection.utils import config_util\nfrom object_detection.utils import label_map_util\n\npipeline_skeleton = 'models/research/object_detection/samples/configs/' + CONFIG_TYPE + '.config'\nconfigs = config_util.get_configs_from_pipeline_file(pipeline_skeleton)\n\nlabel_map = label_map_util.get_label_map_dict(LABEL_MAP_PATH)\nnum_classes = len(label_map.keys())\nmeta_arch = configs[\"model\"].WhichOneof(\"model\")\n\noverride_dict = {\n 'model.{}.num_classes'.format(meta_arch): num_classes,\n 'train_config.batch_size': 24,\n 'train_input_path': TRAIN_RECORD_PATH,\n 'eval_input_path': VAL_RECORD_PATH,\n 'train_config.fine_tune_checkpoint': os.path.join(CHECKPOINT_PATH, 'model.ckpt'),\n 'label_map_path': LABEL_MAP_PATH\n}\n\nconfigs = config_util.merge_external_params_with_configs(configs, kwargs_dict=override_dict)\npipeline_config = config_util.create_pipeline_proto_from_configs(configs)\nconfig_util.save_pipeline_config(pipeline_config, DATA_PATH)",
"_____no_output_____"
]
],
[
[
"# Start training\n\nWe can start a training run by calling the model_main script, passing:\n\n- The location of the pipepline.config we created\n- Where we want to save the model\n- How many steps we want to train the model (the longer you train, the more potential there is to learn)\n- The number of evaluation steps (or how often to test the model) gives us an idea of how well the model is doing",
"_____no_output_____"
]
],
[
[
"!rm -rf $OUTPUT_PATH\n!python -m object_detection.model_main \\\n --pipeline_config_path=$DATA_PATH/pipeline.config \\\n --model_dir=$OUTPUT_PATH \\\n --num_train_steps=$NUM_TRAIN_STEPS \\\n --num_eval_steps=100",
"_____no_output_____"
]
],
[
[
"# Export inference graph\n\nAfter your model has been trained, you might have a few checkpoints available. A checkpoint is usually emitted every 500 training steps. Each checkpoint is a snapshot of your model at that point in training. In the event that a long running training process crashes, you can pick up at the last checkpoint instead of starting from scratch.\n\nWe need to export a checkpoint to a TensorFlow graph proto in order to actually use it. We use regex to find the checkpoint with the highest training step and export it.",
"_____no_output_____"
]
],
[
[
"import os\nimport re\nimport json\n\nfrom object_detection.utils.label_map_util import get_label_map_dict\n\nregex = re.compile(r\"model\\.ckpt-([0-9]+)\\.index\")\nnumbers = [int(regex.search(f).group(1)) for f in os.listdir(OUTPUT_PATH) if regex.search(f)]\nTRAINED_CHECKPOINT_PREFIX = os.path.join(OUTPUT_PATH, 'model.ckpt-{}'.format(max(numbers)))\n\nprint(f'Using {TRAINED_CHECKPOINT_PREFIX}')\n\n!rm -rf $EXPORTED_PATH\n!python -m object_detection.export_inference_graph \\\n --pipeline_config_path=$DATA_PATH/pipeline.config \\\n --trained_checkpoint_prefix=$TRAINED_CHECKPOINT_PREFIX \\\n --output_directory=$EXPORTED_PATH\n\nlabel_map = get_label_map_dict(LABEL_MAP_PATH)\nlabel_array = [k for k in sorted(label_map, key=label_map.get)]\n\nwith open(os.path.join(EXPORTED_PATH, 'labels.json'), 'w') as f:\n json.dump(label_array, f)",
"_____no_output_____"
]
],
[
[
"# Evaluating the results\n\nIn the next steps we will use the images from the evaluation set to **visualize** the results of our model. If you don't see any boxes in your images, consider raising the amount of training steps in the **SETUP** section or adding more training images.",
"_____no_output_____"
]
],
[
[
"\nimport os\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom PIL import Image as PImage\nfrom object_detection.utils import visualization_utils as vis_util\nfrom object_detection.utils import label_map_util\n\n# Load the labels\ncategory_index = label_map_util.create_category_index_from_labelmap(LABEL_MAP_PATH, use_display_name=True)\n\n# Load the model\npath_to_frozen_graph = os.path.join(EXPORTED_PATH, 'frozen_inference_graph.pb')\ndetection_graph = tf.Graph()\nwith detection_graph.as_default():\n od_graph_def = tf.GraphDef()\n with tf.gfile.GFile(path_to_frozen_graph, 'rb') as fid:\n serialized_graph = fid.read()\n od_graph_def.ParseFromString(serialized_graph)\n tf.import_graph_def(od_graph_def, name='')",
"_____no_output_____"
],
[
"bbox_images = []\nfor image_x in val_examples:\n img_path = os.path.join(CLOUD_ANNOTATIONS_MOUNT, image_x) \n with detection_graph.as_default():\n with tf.Session(graph=detection_graph) as sess:\n # Definite input and output Tensors for detection_graph\n image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')\n # Each box represents a part of the image where a particular object was detected.\n detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')\n # Each score represent how level of confidence for each of the objects.\n # Score is shown on the result image, together with the class label.\n detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')\n detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')\n num_detections = detection_graph.get_tensor_by_name('num_detections:0')\n image = PImage.open(img_path)\n # the array based representation of the image will be used later in order to prepare the\n # result image with boxes and labels on it.\n (im_width, im_height) = image.size\n image_np = np.array(image.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8)\n # Expand dimensions since the model expects images to have shape: [1, None, None, 3]\n image_np_expanded = np.expand_dims(image_np, axis=0)\n # Actual detection.\n (boxes, scores, classes, num) = sess.run(\n [detection_boxes, detection_scores, detection_classes, num_detections],\n feed_dict={image_tensor: image_np_expanded})\n # Visualization of the results of a detection.\n vis_util.visualize_boxes_and_labels_on_image_array(\n image_np,\n np.squeeze(boxes),\n np.squeeze(classes).astype(np.int32),\n np.squeeze(scores),\n category_index,\n use_normalized_coordinates=True,\n line_thickness=8)\n \n bbox_images.append(image_np)",
"_____no_output_____"
],
[
"%matplotlib inline\n\nfig = plt.figure(figsize=(50, 50)) # width, height in inches\n\nfor i,bbox_image in enumerate(bbox_images):\n sub = fig.add_subplot(len(bbox_images)+1, 1, i + 1)\n sub.imshow(bbox_image, interpolation='nearest')",
"_____no_output_____"
]
],
[
[
"### Here you can choose different images from the array to see it in more detail",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nplt.figure(figsize=(12, 8))\nplt.imshow(bbox_images[6])",
"_____no_output_____"
]
],
[
[
"# Deploying your model in Watson Machine Leaning\n\nIn the following steps we will export the artifacts that were created to a .tar file and upload the model to Watson Machine Learning. Than we will generate an online deployment using this model.\n\nYou will need a Watson Machine Leaning instance and an IAM API Key in IBM Cloud that has access to this instance. See the steps in the documentation:\n\nhttps://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html\n\nAlso, in the new version of WML you will need a Deployment Space and it's ID\n\nhttps://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html?audience=wdp",
"_____no_output_____"
]
],
[
[
"!ls $EXPORTED_PATH/saved_model",
"_____no_output_____"
],
[
"!tar -zcvf guns-object-detection-model.tar.gz -C $EXPORTED_PATH/saved_model .",
"_____no_output_____"
],
[
"from ibm_watson_machine_learning import APIClient\n\nwml_credentials = {\n \"url\": \"https://us-south.ml.cloud.ibm.com\",\n \"apikey\":\"<apikey>\"\n }\n\nclient = APIClient(wml_credentials)",
"_____no_output_____"
],
[
"client.set.default_space(\"<deployment-space-id>\")",
"_____no_output_____"
],
[
"client.software_specifications.list()",
"_____no_output_____"
],
[
"model_spec = client.software_specifications.get_id_by_name('tensorflow_1.15-py3.6')",
"_____no_output_____"
],
[
"model_meta = {\n client.repository.ModelMetaNames.NAME : \"Tensorflow Guns Object Detection Model\",\n client.repository.ModelMetaNames.DESCRIPTION : \"Guns Object Detection using Kaggle Dataset\",\n client.repository.ModelMetaNames.TYPE : \"tensorflow_1.15\",\n client.repository.ModelMetaNames.SOFTWARE_SPEC_UID : model_spec\n}\nmodel_details_dir = client.repository.store_model( model=\"guns-object-detection-model.tar.gz\", meta_props=model_meta )",
"_____no_output_____"
],
[
"model_id_dir = model_details_dir[\"metadata\"]['id']",
"_____no_output_____"
],
[
"client.hardware_specifications.list()",
"_____no_output_____"
],
[
"meta_props = {\n client.deployments.ConfigurationMetaNames.NAME: \"Tensorflow Guns Object Detection Deployment\",\n client.deployments.ConfigurationMetaNames.ONLINE: {},\n client.deployments.ConfigurationMetaNames.HARDWARE_SPEC : { \"id\": \"cf70f086-916d-4684-91a7-264c49c6d425\"}\n}\ndeployment_details_dir = client.deployments.create(model_id_dir, meta_props )",
"_____no_output_____"
],
[
"deployment_id = deployment_details_dir['metadata']['id']",
"_____no_output_____"
]
],
[
[
"# Test the deployed model\n\nChoose one of the images from the evaluation set to score the model using the newly created API. This step can be done in another notebook or custom code, since your deployed model is not dependent of this kernel. ",
"_____no_output_____"
]
],
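For reference, a minimal standalone scoring script would only need the WML credentials, the deployment space ID, and the deployment ID (all placeholders below), plus an image converted to a nested list, mirroring the cells that follow:

```python
import numpy as np
from PIL import Image
from ibm_watson_machine_learning import APIClient

# Placeholders: fill in your own API key, space ID, deployment ID and image path.
client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<apikey>"})
client.set.default_space("<deployment-space-id>")

image = Image.open("<path-to-image>.jpg")
(im_width, im_height) = image.size
image_np = np.array(image.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8)

payload_scoring = {"input_data": [{"values": [image_np.tolist()]}]}
predictions = client.deployments.score("<deployment-id>", payload_scoring)
```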
[
[
"img_path = os.path.join(CLOUD_ANNOTATIONS_MOUNT, val_examples[5]) \nif os.path.isfile(img_path):\n print(\"OK\")",
"_____no_output_____"
],
[
"image = PImage.open(img_path)\n# the array based representation of the image will be used later in order to prepare the\n# result image with boxes and labels on it.\n(im_width, im_height) = image.size\nimage_np = np.array(image.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8)",
"_____no_output_____"
],
[
"data = image_np.tolist()\npayload_scoring = {\n \"input_data\": [{\n \"values\": [data]\n }]\n}",
"_____no_output_____"
],
[
"%%time\npredictions = client.deployments.score(deployment_id, payload_scoring)",
"_____no_output_____"
],
[
"for x in predictions['predictions']:\n if x['id'] == 'detection_scores':\n scores = x['values'][0]\n if x['id'] == 'detection_boxes':\n boxes = x['values'][0]\n if x['id'] == 'num_detections':\n num = x['values'][0]\n if x['id'] == 'detection_classes':\n classes = x['values'][0]",
"_____no_output_____"
],
[
"vis_util.visualize_boxes_and_labels_on_image_array(\n image_np,\n np.squeeze(boxes),\n np.squeeze(classes).astype(np.int32),\n np.squeeze(scores),\n category_index,\n use_normalized_coordinates=True,\n line_thickness=8)",
"_____no_output_____"
],
[
"%matplotlib inline\nplt.figure(figsize=(12, 8))\nplt.imshow(image_np)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5a389927376bc5512845b76f6f5b36c6e63794 | 18,008 | ipynb | Jupyter Notebook | courses/udacity_intro_to_tensorflow_for_deep_learning/l08c06_forecasting_with_rnn.ipynb | airman00/examples | 2a3dbce07630b3318f7b250325051af1d868f261 | [
"Apache-2.0"
] | 1 | 2020-05-26T04:14:29.000Z | 2020-05-26T04:14:29.000Z | courses/udacity_intro_to_tensorflow_for_deep_learning/l08c06_forecasting_with_rnn.ipynb | 1zuu/examples | 5875c06c3cc76af5419986ab9d2f3d51bea43425 | [
"Apache-2.0"
] | 5 | 2021-06-08T21:59:26.000Z | 2022-02-10T02:33:10.000Z | courses/udacity_intro_to_tensorflow_for_deep_learning/l08c06_forecasting_with_rnn.ipynb | 1zuu/examples | 5875c06c3cc76af5419986ab9d2f3d51bea43425 | [
"Apache-2.0"
] | 1 | 2020-04-21T01:27:17.000Z | 2020-04-21T01:27:17.000Z | 31.592982 | 330 | 0.488394 | [
[
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Forecasting with an RNN",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c06_forecasting_with_rnn.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c06_forecasting_with_rnn.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
]
],
[
[
"from __future__ import absolute_import, division, print_function, unicode_literals",
"_____no_output_____"
],
[
"try:\n # Use the %tensorflow_version magic if in colab.\n %tensorflow_version 2.x\nexcept Exception:\n pass",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\n\nkeras = tf.keras",
"_____no_output_____"
],
[
"def plot_series(time, series, format=\"-\", start=0, end=None, label=None):\n plt.plot(time[start:end], series[start:end], format, label=label)\n plt.xlabel(\"Time\")\n plt.ylabel(\"Value\")\n if label:\n plt.legend(fontsize=14)\n plt.grid(True)\n \ndef trend(time, slope=0):\n return slope * time\n \n \ndef seasonal_pattern(season_time):\n \"\"\"Just an arbitrary pattern, you can change it if you wish\"\"\"\n return np.where(season_time < 0.4,\n np.cos(season_time * 2 * np.pi),\n 1 / np.exp(3 * season_time))\n\n \ndef seasonality(time, period, amplitude=1, phase=0):\n \"\"\"Repeats the same pattern at each period\"\"\"\n season_time = ((time + phase) % period) / period\n return amplitude * seasonal_pattern(season_time)\n \n \ndef white_noise(time, noise_level=1, seed=None):\n rnd = np.random.RandomState(seed)\n return rnd.randn(len(time)) * noise_level\n \n \ndef window_dataset(series, window_size, batch_size=32,\n shuffle_buffer=1000):\n dataset = tf.data.Dataset.from_tensor_slices(series)\n dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)\n dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))\n dataset = dataset.shuffle(shuffle_buffer)\n dataset = dataset.map(lambda window: (window[:-1], window[-1]))\n dataset = dataset.batch(batch_size).prefetch(1)\n return dataset\n \ndef model_forecast(model, series, window_size):\n ds = tf.data.Dataset.from_tensor_slices(series)\n ds = ds.window(window_size, shift=1, drop_remainder=True)\n ds = ds.flat_map(lambda w: w.batch(window_size))\n ds = ds.batch(32).prefetch(1)\n forecast = model.predict(ds)\n return forecast",
"_____no_output_____"
],
[
"time = np.arange(4 * 365 + 1)\n\nslope = 0.05\nbaseline = 10\namplitude = 40\nseries = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)\n\nnoise_level = 5\nnoise = white_noise(time, noise_level, seed=42)\n\nseries += noise\n\nplt.figure(figsize=(10, 6))\nplot_series(time, series)\nplt.show()",
"_____no_output_____"
],
[
"split_time = 1000\ntime_train = time[:split_time]\nx_train = series[:split_time]\ntime_valid = time[split_time:]\nx_valid = series[split_time:]",
"_____no_output_____"
]
],
[
[
"## Simple RNN Forecasting",
"_____no_output_____"
]
],
[
[
"keras.backend.clear_session()\ntf.random.set_seed(42)\nnp.random.seed(42)\n\nwindow_size = 30\ntrain_set = window_dataset(x_train, window_size, batch_size=128)\n\nmodel = keras.models.Sequential([\n keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),\n input_shape=[None]),\n keras.layers.SimpleRNN(100, return_sequences=True),\n keras.layers.SimpleRNN(100),\n keras.layers.Dense(1),\n keras.layers.Lambda(lambda x: x * 200.0)\n])\nlr_schedule = keras.callbacks.LearningRateScheduler(\n lambda epoch: 1e-7 * 10**(epoch / 20))\noptimizer = keras.optimizers.SGD(lr=1e-7, momentum=0.9)\nmodel.compile(loss=keras.losses.Huber(),\n optimizer=optimizer,\n metrics=[\"mae\"])\nhistory = model.fit(train_set, epochs=100, callbacks=[lr_schedule])",
"_____no_output_____"
],
[
"plt.semilogx(history.history[\"lr\"], history.history[\"loss\"])\nplt.axis([1e-7, 1e-4, 0, 30])",
"_____no_output_____"
],
[
"keras.backend.clear_session()\ntf.random.set_seed(42)\nnp.random.seed(42)\n\nwindow_size = 30\ntrain_set = window_dataset(x_train, window_size, batch_size=128)\nvalid_set = window_dataset(x_valid, window_size, batch_size=128)\n\nmodel = keras.models.Sequential([\n keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),\n input_shape=[None]),\n keras.layers.SimpleRNN(100, return_sequences=True),\n keras.layers.SimpleRNN(100),\n keras.layers.Dense(1),\n keras.layers.Lambda(lambda x: x * 200.0)\n])\noptimizer = keras.optimizers.SGD(lr=1.5e-6, momentum=0.9)\nmodel.compile(loss=keras.losses.Huber(),\n optimizer=optimizer,\n metrics=[\"mae\"])\nearly_stopping = keras.callbacks.EarlyStopping(patience=50)\nmodel_checkpoint = keras.callbacks.ModelCheckpoint(\n \"my_checkpoint\", save_best_only=True)\nmodel.fit(train_set, epochs=500,\n validation_data=valid_set,\n callbacks=[early_stopping, model_checkpoint])",
"_____no_output_____"
],
[
"model = keras.models.load_model(\"my_checkpoint\")",
"_____no_output_____"
],
[
"rnn_forecast = model_forecast(\n model,\n series[split_time - window_size:-1],\n window_size)[:, 0]",
"_____no_output_____"
],
[
"plt.figure(figsize=(10, 6))\nplot_series(time_valid, x_valid)\nplot_series(time_valid, rnn_forecast)",
"_____no_output_____"
],
[
"keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()",
"_____no_output_____"
]
],
[
[
"## Sequence-to-Sequence Forecasting",
"_____no_output_____"
]
],
[
[
"def seq2seq_window_dataset(series, window_size, batch_size=32,\n shuffle_buffer=1000):\n series = tf.expand_dims(series, axis=-1)\n ds = tf.data.Dataset.from_tensor_slices(series)\n ds = ds.window(window_size + 1, shift=1, drop_remainder=True)\n ds = ds.flat_map(lambda w: w.batch(window_size + 1))\n ds = ds.shuffle(shuffle_buffer)\n ds = ds.map(lambda w: (w[:-1], w[1:]))\n return ds.batch(batch_size).prefetch(1)",
"_____no_output_____"
],
[
"for X_batch, Y_batch in seq2seq_window_dataset(tf.range(10), 3,\n batch_size=1):\n print(\"X:\", X_batch.numpy())\n print(\"Y:\", Y_batch.numpy())",
"_____no_output_____"
],
[
"keras.backend.clear_session()\ntf.random.set_seed(42)\nnp.random.seed(42)\n\nwindow_size = 30\ntrain_set = seq2seq_window_dataset(x_train, window_size,\n batch_size=128)\n\nmodel = keras.models.Sequential([\n keras.layers.SimpleRNN(100, return_sequences=True,\n input_shape=[None, 1]),\n keras.layers.SimpleRNN(100, return_sequences=True),\n keras.layers.Dense(1),\n keras.layers.Lambda(lambda x: x * 200)\n])\nlr_schedule = keras.callbacks.LearningRateScheduler(\n lambda epoch: 1e-7 * 10**(epoch / 30))\noptimizer = keras.optimizers.SGD(lr=1e-7, momentum=0.9)\nmodel.compile(loss=keras.losses.Huber(),\n optimizer=optimizer,\n metrics=[\"mae\"])\nhistory = model.fit(train_set, epochs=100, callbacks=[lr_schedule])",
"_____no_output_____"
],
[
"plt.semilogx(history.history[\"lr\"], history.history[\"loss\"])\nplt.axis([1e-7, 1e-4, 0, 30])",
"_____no_output_____"
],
[
"keras.backend.clear_session()\ntf.random.set_seed(42)\nnp.random.seed(42)\n\nwindow_size = 30\ntrain_set = seq2seq_window_dataset(x_train, window_size,\n batch_size=128)\nvalid_set = seq2seq_window_dataset(x_valid, window_size,\n batch_size=128)\n\nmodel = keras.models.Sequential([\n keras.layers.SimpleRNN(100, return_sequences=True,\n input_shape=[None, 1]),\n keras.layers.SimpleRNN(100, return_sequences=True),\n keras.layers.Dense(1),\n keras.layers.Lambda(lambda x: x * 200.0)\n])\noptimizer = keras.optimizers.SGD(lr=1e-6, momentum=0.9)\nmodel.compile(loss=keras.losses.Huber(),\n optimizer=optimizer,\n metrics=[\"mae\"])\nearly_stopping = keras.callbacks.EarlyStopping(patience=10)\nmodel.fit(train_set, epochs=500,\n validation_data=valid_set,\n callbacks=[early_stopping])",
"_____no_output_____"
],
[
"rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)\nrnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0]",
"_____no_output_____"
],
[
"plt.figure(figsize=(10, 6))\nplot_series(time_valid, x_valid)\nplot_series(time_valid, rnn_forecast)",
"_____no_output_____"
],
[
"keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5a3af575dde44825344bcf9690f2c19cc44f62 | 1,000 | ipynb | Jupyter Notebook | 12 Queue/12.1 Intro to Queue.ipynb | suhassuhas/Coding-Ninjas---Data-Structures-and-Algorithms-in-Python | e660d5a83b80df9cb67b2d06f2b5ba182586f3da | [
"Unlicense"
] | 4 | 2021-09-09T06:52:31.000Z | 2022-01-09T00:05:11.000Z | 12 Queue/12.1 Intro to Queue.ipynb | rishitbhojak/Coding-Ninjas---Data-Structures-and-Algorithms-in-Python | 3b5625df60f7ac554fae58dc8ea9fd42012cbfae | [
"Unlicense"
] | null | null | null | 12 Queue/12.1 Intro to Queue.ipynb | rishitbhojak/Coding-Ninjas---Data-Structures-and-Algorithms-in-Python | 3b5625df60f7ac554fae58dc8ea9fd42012cbfae | [
"Unlicense"
] | 5 | 2021-09-15T13:49:32.000Z | 2022-01-20T20:37:46.000Z | 22.222222 | 104 | 0.551 | [
[
[
"### Queue: follows FIFO mechanism\n\n1) Enqueue() - Inserting an element to the queue\n\n2) Dequeue() - Deleting an element from the queue\n\n3) size() - returns the size of the queue\n\n4) isEmpty() - returns queue is empty or not\n\n5) front() - returns front element of queue\n\n**Elements are inserted at the rear part of the queue and elements are removed from the front end.",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
cb5a3f355f80282b166bb93a735d37314935c6cb | 131,919 | ipynb | Jupyter Notebook | HeroesOfPymoli/.ipynb_checkpoints/HeroesOfPymoli_analysis-checkpoint.ipynb | zenfinity/pandas-challenge | ab906f0c006d8eb6b0c97b08b64401c5e214c30c | [
"ADSL"
] | null | null | null | HeroesOfPymoli/.ipynb_checkpoints/HeroesOfPymoli_analysis-checkpoint.ipynb | zenfinity/pandas-challenge | ab906f0c006d8eb6b0c97b08b64401c5e214c30c | [
"ADSL"
] | null | null | null | HeroesOfPymoli/.ipynb_checkpoints/HeroesOfPymoli_analysis-checkpoint.ipynb | zenfinity/pandas-challenge | ab906f0c006d8eb6b0c97b08b64401c5e214c30c | [
"ADSL"
] | null | null | null | 33.507493 | 622 | 0.404923 | [
[
[
"### Heroes Of Pymoli Data Analysis\n* Of the 1163 active players, the vast majority are male (84%). There also exists, a smaller, but notable proportion of female players (14%).\n\n* Our peak age demographic falls between 20-24 (44.8%) with secondary groups falling between 15-19 (18.60%) and 25-29 (13.4%). \n-----",
"_____no_output_____"
],
[
"### Note\n* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.",
"_____no_output_____"
]
],
[
[
"# Dependencies and Setup\nimport pandas as pd\n\n# File to Load (Remember to Change These)\nfile_to_load = \"Resources/purchase_data.csv\"\n\n# Read Purchasing File and store into Pandas data frame\npurchase_data_df = pd.read_csv(file_to_load)",
"_____no_output_____"
],
[
"#What are the columns in teh data file\npurchase_data_df.columns",
"_____no_output_____"
]
],
[
[
"## Player Count",
"_____no_output_____"
],
[
"* Display the total number of players\n",
"_____no_output_____"
]
],
[
[
"playerCountTotal = purchase_data_df['SN'].nunique()\nplayerCountTotal",
"_____no_output_____"
],
[
"#Make sure set has no duplicates using nunique and return count, display in a dataframe\nplayerCountTotal_df = pd.DataFrame({'Total Players' : [playerCountTotal]})\nplayerCountTotal_df",
"_____no_output_____"
]
],
[
[
"## Purchasing Analysis (Total)",
"_____no_output_____"
],
[
"* Run basic calculations to obtain number of unique items, average price, etc.\n\n\n* Create a summary data frame to hold the results\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display the summary data frame\n",
"_____no_output_____"
]
],
[
[
"#Unique Items\nitemsTotal = purchase_data_df['Item ID'].nunique()\nitemsTotal",
"_____no_output_____"
],
[
"#Average Price\nitemsAvgPrice = purchase_data_df['Price'].mean()\nitemsAvgPrice",
"_____no_output_____"
],
[
"#Number Purchases\nitemsNumberPurchases = purchase_data_df['Purchase ID'].nunique()\nitemsNumberPurchases",
"_____no_output_____"
],
[
"itemsTotalRevenue = purchase_data_df['Price'].sum()\nitemsTotalRevenue",
"_____no_output_____"
],
[
"#Summary Table\n#Not able to get this format working 'Average Price' : [purchase_data_df['Mean'].astype(float).map(\"${:,.0f}\".format)],\n#hosted_in_us_df[\"average_donation\"] = hosted_in_us_df[\"average_donation\"].astype(float).map(\"${:,.2f}\".format)\npurchase_summary_df = pd.DataFrame({'Total Players' : [playerCountTotal],\n 'Average Price' : [itemsAvgPrice],\n 'Number Purchases' : [itemsNumberPurchases],\n 'Total Revenue' : [itemsTotalRevenue]})\npurchase_summary_df",
"_____no_output_____"
],
[
"#Make it pretty\n#purchase_summary_df['Average Price'] = purchase_summary_df['Average Price'].astype(float).map(\"${:,.2f}\".format)\n#purchase_summary_df['Total Revenue'] = purchase_summary_df['Total Revenue'].astype(float).map(\"${:,.2f}\".format)\nformat_dict = {'Average Price':'${0:,.2f}', 'Total Revenue':'${0:,.2f}'}\npurchase_summary_df.style.format(format_dict)\n#purchase_summary_df",
"_____no_output_____"
]
],
[
[
"## Gender Demographics",
"_____no_output_____"
],
[
"* Percentage and Count of Male Players\n\n\n* Percentage and Count of Female Players\n\n\n* Percentage and Count of Other / Non-Disclosed\n\n\n",
"_____no_output_____"
]
],
[
[
"#clean data of nulls \npurchase_data_df.count()",
"_____no_output_____"
],
[
"cleaned_purchase_data_df = purchase_data_df.dropna(how='all')\ncleaned_purchase_data_df.head()",
"_____no_output_____"
],
[
"#clean data of duplicates to get accurate count of only gender and players\ngender_purchase_data_df = cleaned_purchase_data_df.loc[:, ['SN','Gender']]\ngender_purchase_data_df.head()",
"_____no_output_____"
],
[
"gender_purchase_data_df.drop_duplicates(inplace = True)\ngenderSummary_df = pd.DataFrame(gender_purchase_data_df['Gender'].value_counts())\ngenderSummary_df",
"_____no_output_____"
],
[
"#Rename column\ngenderSummary_df = genderSummary_df.rename(columns={'Gender': 'Total Count'})\ngenderSummary_df",
"_____no_output_____"
],
[
"#Calc and display percentage\ngender_total_people = genderSummary_df.sum()\ngenderSummary_df['Percentage'] = genderSummary_df/gender_total_people*100\ngenderSummary_df\n\n\n",
"_____no_output_____"
],
[
"#Make it pretty\n#genderSummary_df['Percentage'] = genderSummary_df['Percentage'].astype(float).map(\"{:,.2f}%\".format)\n#genderSummary_df\nformat_dictPercentage = {'Percentage':'{0:,.2f}%'}\ngenderSummary_df.style.format(format_dictPercentage)",
"_____no_output_____"
]
],
[
[
"\n## Purchasing Analysis (Gender)",
"_____no_output_____"
],
[
"* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender\n\n\n\n\n* Create a summary data frame to hold the results\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display the summary data frame",
"_____no_output_____"
]
],
[
[
"#purchase count, avg. purchase price, avg. purchase total per person etc. by gender\n\n#Need to rename, and it's not working...?\ngenderPurchaseCount = purchase_data_df.loc[:,['Purchase ID', 'Gender']]\n#genderPurchaseCount.rename(columns={'Purchase ID': ['Total Purchases'], 'Gender' : 'Gender'})\n#print(genderPurchaseCount.columns)\ngenderPurchaseCount.groupby('Gender').count()\n\n",
"_____no_output_____"
],
[
"#Average purchase price by gender is all purchases averaged and grouped by Gender\ngenderPurchaseAvgPrice = purchase_data_df.loc[:,['Price', 'Gender']]\ngenderPurchaseAvgPrice = genderPurchaseAvgPrice.groupby('Gender').mean()\ngenderPurchaseAvgPrice\n",
"_____no_output_____"
],
[
"#Average purchase per person is sum purchase prices for each person divided by the number of purchases they made\n#Need to get sum of transactions per person, and count each\nprint(gender_purchase_data_df.columns)",
"Index(['SN', 'Gender'], dtype='object')\n"
],
[
"genderNumberPurPerPerson = purchase_data_df.groupby('SN')\ngenderNumberPurPerPerson = genderNumberPurPerPerson.count()\ngenderNumberPurPerPerson",
"_____no_output_____"
],
[
"genderTotPurPerson = purchase_data_df.groupby('SN')\ngenderTotPurPerson = genderTotPurPerson['Price'].sum()\ngenderTotPurPerson",
"_____no_output_____"
],
[
"gender_purchase_data_df_merged = pd.merge(gender_purchase_data_df,genderNumberPurPerPerson, on='SN')\ngender_purchase_data_df_merged = pd.merge(gender_purchase_data_df_merged, genderTotPurPerson, on= 'SN')\ngender_purchase_data_df_merged.head()\n",
"_____no_output_____"
],
[
"genderPurchaseAvgTot = gender_purchase_data_df_merged.groupby('Gender_x')\ngenderPurchaseAvgTot = genderPurchaseAvgTot['Price_y'].mean()\ngenderPurchaseAvgTot\n",
"_____no_output_____"
],
[
"#add calcs to summary\ngenderSummary_df['Avg Price']=genderPurchaseAvgPrice\ngenderSummary_df.head()",
"_____no_output_____"
],
[
"genderSummary_df['Avg Total']=genderPurchaseAvgTot\ngenderSummary_df.head()\n",
"_____no_output_____"
],
[
"#Make it purdy\n#genderSummary_df['Avg Price'] = genderSummary_df['Avg Price'].astype(float).map(\"${:,.2f}\".format)\n#genderSummary_df['Avg Total'] = genderSummary_df['Avg Total'].astype(float).map(\"${:,.2f}\".format)\nformat_dictGenderPurSummary = {'Percentage':'{0:,.2f}%', 'Avg Price':'${0:,.2f}', 'Avg Total':'${0:,.2f}'}\ngenderSummary_df.style.format(format_dictGenderPurSummary)\n#genderSummary_df",
"_____no_output_____"
]
],
[
[
"## Age Demographics",
"_____no_output_____"
],
[
"* Establish bins for ages\n\n\n* Categorize the existing players using the age bins. Hint: use pd.cut()\n\n\n* Calculate the numbers and percentages by age group\n\n\n* Create a summary data frame to hold the results\n\n\n* Optional: round the percentage column to two decimal points\n\n\n* Display Age Demographics Table\n",
"_____no_output_____"
]
],
[
[
"#Create bins and labels\nageBins = [0,10,14,19,24,29,34,39,40]\nageBinLabels = ['<10', '10-14','15-19','20-24','25-29','30-34','35-39','40+']\npd.cut(purchase_data_df['Age'],ageBins,labels=ageBinLabels).head()",
"_____no_output_____"
],
[
"#Add a column to the the main import df with brackets\npurchase_data_df['Age Bracket'] = pd.cut(purchase_data_df['Age'],ageBins,labels=ageBinLabels)\npurchase_data_df.head()",
"_____no_output_____"
],
[
"#New df for age analysis\nage_df = purchase_data_df.loc[:,['SN','Age','Age Bracket','Price']]\n\n#Df for count of age brackets, and we want individual players, so drop the duplicates\nage_df = age_df.sort_index()\nage_df = age_df.drop_duplicates('SN')\nage_df.count()",
"_____no_output_____"
],
[
"\nageBracketCount = age_df.groupby('Age Bracket')\nageBracketCount = ageBracketCount.count()\nageBracketCount = ageBracketCount.rename(columns={'SN': 'Total Count'})\ndel ageBracketCount['Age']\ndel ageBracketCount['Price']\nageBracketCount",
"_____no_output_____"
],
[
"#Df for percentage...The numbers don't look right above in the brackets\n\n\n#ageBracketPercent = age_df.groupby('Age Bracket')\nageBracketPercent = pd.DataFrame(ageBracketCount/ageBracketCount.sum()*100)\nageBracketPercent = ageBracketPercent.rename(columns={'Total Count' : 'Percent'})\n#del ageBracketCount['Age']\n#del ageBracketCount['Price']\nageBracketPercent\n ",
"_____no_output_____"
],
[
"ageSummary_df = ageBracketCount \nageSummary_df['Percent'] = ageBracketPercent\nageSummary_df",
"_____no_output_____"
],
[
"#Make purdy\n#ageSummary_df['Percent'] = ageSummary_df['Percent'].astype(float).map(\"{:,.2f}%\".format)\nformat_dictAgeSummary = {'Percent' : '{:,.2f}%' }\nageSummary_df.style.format(format_dictAgeSummary)",
"_____no_output_____"
]
],
[
[
"## Purchasing Analysis (Age)",
"_____no_output_____"
],
[
"* Bin the purchase_data data frame by age\n\n\n* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below\n\n\n* Create a summary data frame to hold the results\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display the summary data frame",
"_____no_output_____"
]
],
[
[
"#Purchase Count\tAverage Purchase Price\tTotal Purchase Value\tAvg Total Purchase per Person\n\n#New df for age Purchase analysis\nagePurchase_df = purchase_data_df.loc[:,['Purchase ID','Age','Age Bracket','Price','SN']]\nagePurchase_df.count()\n",
"_____no_output_____"
],
[
"#Df for Purchase Count\nagePurchaseCount = agePurchase_df.groupby('Age Bracket')\nagePurchaseCount = agePurchaseCount.count()\nagePurchaseCount = agePurchaseCount.rename(columns={'Purchase ID' : 'Purchase Count'})\ndel agePurchaseCount['Age']\ndel agePurchaseCount['Price']\ndel agePurchaseCount['SN']\nagePurchaseCount",
"_____no_output_____"
],
[
"#Df for Average Purchase Price\nagePurchaseAvg = agePurchase_df.groupby('Age Bracket')\nagePurchaseAvg = agePurchaseAvg.mean()\nagePurchaseAvg = agePurchaseAvg.rename(columns={'Price' : 'Purchase Avg'})\ndel agePurchaseAvg['Age']\ndel agePurchaseAvg['Purchase ID']\nagePurchaseAvg\n",
"_____no_output_____"
],
[
"#Df for Total Purchase Value\nagePurchaseTotVal = agePurchase_df.groupby('Age Bracket')\nagePurchaseTotVal = agePurchaseTotVal.sum()\nagePurchaseTotVal = agePurchaseTotVal.rename(columns={'Price' : 'Total Value'})\ndel agePurchaseTotVal['Age']\ndel agePurchaseTotVal['Purchase ID']\nagePurchaseTotVal",
"_____no_output_____"
],
[
"#Df for Avg Total Purchase per Person, which is sum of Price / number of purchases, which is groupby SN for calcs...\n\nPerPerson = agePurchase_df.groupby('SN')\n#print(PerPerson.sum())\n#print(PerPerson.count())\nagePurchaseAvgTot = PerPerson.sum()/PerPerson.count()\n\nagePurchaseAvgTot = agePurchaseAvgTot.rename(columns={'Price' : 'Average Total'})\ndel agePurchaseAvgTot['Age']\ndel agePurchaseAvgTot['Purchase ID']\ndel agePurchaseAvgTot['Age Bracket']\nagePurchaseAvgTot\n",
"_____no_output_____"
],
[
"#then merge as new column to df that's had dups removed, then do a new group by Age Brackets\nagePerPerson = agePurchase_df.drop_duplicates('SN')\nagePerPerson = pd.merge(agePerPerson, agePurchaseAvgTot, on='SN')\nagePurchaseAvgTotByAge = agePerPerson.groupby('Age Bracket')\nagePurchaseAvgTotByAge = agePurchaseAvgTotByAge.mean()\ndel agePurchaseAvgTotByAge['Age']\ndel agePurchaseAvgTotByAge['Purchase ID']\ndel agePurchaseAvgTotByAge['Price']\nagePurchaseAvgTotByAge",
"_____no_output_____"
],
[
"#Summary df\nagePurchaseSummary = agePurchaseCount\nagePurchaseSummary['Purchase Avg'] = agePurchaseAvg\nagePurchaseSummary['Total Value'] = agePurchaseTotVal\nagePurchaseSummary['Average Total'] = agePurchaseAvgTotByAge\n#Make purdy\nformat_dictAgePurSummary = {'Purchase Avg' : '${:,.2f}','Total Value': '${:,.2f}','Average Total':'${:,.2f}' }\n\nagePurchaseSummary.style.format(format_dictAgePurSummary)",
"_____no_output_____"
]
],
[
[
"## Top Spenders",
"_____no_output_____"
],
[
"* Run basic calculations to obtain the results in the table below\n\n\n* Create a summary data frame to hold the results\n\n\n* Sort the total purchase value column in descending order\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display a preview of the summary data frame\n\n",
"_____no_output_____"
]
],
[
[
"#Purchase Count\tAverage Purchase Price\tTotal Purchase Value\n#Add a column to per person df that counts transactions...\nPerPersonTot = PerPerson.sum()\nPerPersonCount = PerPerson.count()\n\ndel PerPersonCount['Purchase ID']\ndel PerPersonCount['Age']\ndel PerPersonCount['Age Bracket']\nPerPersonCount = PerPersonCount.rename(columns={'Price' : 'Total Transactions'})\n\ndel PerPersonTot['Purchase ID']\ndel PerPersonTot['Age']\n\nPerPersonTot = PerPersonTot.rename(columns={'Price' : 'Total Value'})\nPerPersonTot\nPerPerson_main = pd.merge(agePerPerson,PerPersonCount, on='SN')\nPerPerson_main = pd.merge(PerPerson_main,PerPersonTot, on='SN')\nPerPerson_main\n\n\n\n",
"_____no_output_____"
],
[
"#...then sort by Total Transactions\nPerPerson_main_sorted = PerPerson_main.sort_values(by=['Total Transactions'], ascending=False)\nTopSpenders = PerPerson_main_sorted.iloc[0:5,4:8]\nTopSpenders = TopSpenders.set_index('SN')\n#TopSpenders['Average Total'] = TopSpenders['Average Total'].astype(float).map(\"${:,.2f}\".format)\n#TopSpenders['Total Value'] = TopSpenders['Total Value'].astype(float).map(\"${:,.2f}\".format)\nformat_dictTopSpenders = {'Average Total' : '${:,.2f}','Total Value': '${:,.2f}'}\n\nTopSpenders.style.format(format_dictTopSpenders)\n",
"_____no_output_____"
]
],
[
[
"## Most Popular Items",
"_____no_output_____"
],
[
"* Retrieve the Item ID, Item Name, and Item Price columns\n\n\n* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value\n\n\n* Create a summary data frame to hold the results\n\n\n* Sort the purchase count column in descending order\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display a preview of the summary data frame\n\n",
"_____no_output_____"
]
],
[
[
"#Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value\n#purchase_data_df.columns\n#Retrieve the Item ID, Item Name, and Item Price columns\nitems = purchase_data_df.loc[:, ['Item ID','Item Name','Price']]\nitemsByID = items.groupby('Item ID')\nitemsByName = items.groupby('Item Name')\n\n#Item purchase count\nitemsByID_count = itemsByID.count()\ndel itemsByID_count['Item Name']\nitemsByID_count = itemsByID_count.rename(columns={'Price': 'Item Count'})\nitemsByID_count\n\n\n",
"_____no_output_____"
],
[
"#Item Price\nitemsByPrice = items.drop_duplicates()\nitemsByPrice\n",
"_____no_output_____"
],
[
"#Total purchase value\nitemsPurVal = itemsByID.sum()\n#del itemsPurVal['Item Name']\nitemsPurVal = itemsPurVal.rename(columns={'Price': 'Total Purchase Value'})\nitemsPurVal",
"_____no_output_____"
],
[
"items_main = pd.merge(itemsByPrice,itemsByID_count, on='Item ID', how='outer')\nitems_main = pd.merge(items_main,itemsPurVal, on='Item ID', how='outer')\nitems_main",
"_____no_output_____"
],
[
"#Create a summary data frame to hold the results\n#Sort the purchase count column in descending order\n\nitems_main_sorted = items_main.sort_values(by='Item Count', ascending=False)\nitems_main_sorted = items_main_sorted.drop_duplicates('Item Name')\n\nPopularItems = items_main_sorted.iloc[0:5,1:6]\nPopularItems = PopularItems.set_index('Item Name')\n\n#Optional: give the displayed data cleaner formatting\n\n#PopularItems['Price'] = PopularItems['Price'].astype(float).map(\"${:,.2f}\".format)\n#PopularItems['Total Purchase Value'] = PopularItems['Total Purchase Value'].astype(float).map(\"${:,.2f}\".format)\nformat_dictPopularItems = {'Price' : '${:,.2f}','Total Purchase Value' : '${:,.2f}' }\nPopularItems.style.format(format_dictPopularItems)\n\n\n\n\n#Display a preview of the summary data frame",
"_____no_output_____"
]
],
[
[
"## Most Profitable Items",
"_____no_output_____"
],
[
"* Sort the above table by total purchase value in descending order\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display a preview of the data frame\n\n",
"_____no_output_____"
]
],
[
[
"ProfitableItems = PopularItems.sort_values(by='Total Purchase Value', ascending=False)\n#ProfitableItems['Total Purchase Value'] = ProfitableItems['Total Purchase Value'].astype(float).map(\"${:,.2f}\".format)\nformat_dictProfitableItems = {'Price' : '${:,.2f}','Total Purchase Value' : '${:,.2f}' }\nProfitableItems.style.format(format_dictProfitableItems)\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
cb5a51cd591af5e75c18b69be9b2d16b241cc997 | 5,469 | ipynb | Jupyter Notebook | blogScrape.ipynb | GilesStrong/AMVA4NP_BlogGen | 1d831d9db63a612cdc709e53c1453eb709d9158e | [
"MIT"
] | 1 | 2019-05-14T21:31:08.000Z | 2019-05-14T21:31:08.000Z | blogScrape.ipynb | GilesStrong/AMVA4NP_BlogGen | 1d831d9db63a612cdc709e53c1453eb709d9158e | [
"MIT"
] | null | null | null | blogScrape.ipynb | GilesStrong/AMVA4NP_BlogGen | 1d831d9db63a612cdc709e53c1453eb709d9158e | [
"MIT"
] | null | null | null | 26.548544 | 278 | 0.501189 | [
[
[
"from bs4 import BeautifulSoup\nimport glob\nfrom six.moves import cPickle as pickle",
"_____no_output_____"
],
[
"def scrapePost(name, onlyAuthor=False):\n with open(name) as infile:\n scrape = infile.read()\n author = scrape[scrape.rfind('<h4>')+4:scrape.rfind('</h4>')]\n if onlyAuthor:\n return author\n content = scrape[scrape.find('<div class=\"post-content clear\">'):scrape.find('</div><!--/.post-content-->')]\n blog = BeautifulSoup(content, 'html.parser')\n content = blog.get_text().split('\\n')\n if 'by ' in content[1]:\n post = content[2:-3]\n else:\n post = content[1:-3]\n mergedPost = ''\n for i in post:\n if i != '':\n mergedPost += ' ' + i\n return author, mergedPost",
"_____no_output_____"
],
[
"base = 'data/amva4newphysics.wordpress.com/'\nauthors = []\nfor year in ['2015', '2016', '2017']:\n posts = glob.glob(base + year + '/*/*/*/index.html')\n print '{} post found in year {}'.format(len(posts), year)\n for post in posts:\n if '/page/' in post:\n continue\n author = scrapePost(post, True)\n if len(author) >= 20:\n print author\n print \"Error in: \", post\n else:\n authors.append(author)",
"8 post found in year 2015\n149 post found in year 2016\n127 post found in year 2017\n"
],
[
"print set(authors)",
"set(['sabinehe', 'sengpei', 'Dr. Markus Stoye', 'dorigo', 'Anna Stakia', 'GilesStrong', 'fabriciojm', 'josaitis', 'Pablo de Castro', 'ioannapapavergou', 'Pietro Vischia', 'ceciliatosciri', 'amva4np', 'alesaggio', 'Andrea Giammanco', 'alexanderheld', 'Greg Kotkowski'])\n"
],
[
"blog = {}\nfor author in authors:\n blog[author] = ''",
"_____no_output_____"
],
[
"for year in ['2015', '2016', '2017']:\n posts = glob.glob(base + year + '/*/*/*/index.html')\n print '{} post found in year {}'.format(len(posts), year)\n for post in posts:\n if '/page/' in post:\n continue\n author, content = scrapePost(post)\n if len(content) < 20:\n print content\n print \"Error in: \", post\n blog[author] += content",
"8 post found in year 2015\n149 post found in year 2016\n127 post found in year 2017\n"
],
[
"print \"Author\\tnumber of characters,\\twords\"\nfor author in blog:\n print \"{}\\t\\t{}\\t{}\".format(author, len(blog[author]), len(blog[author].split(' ')))",
"Author\tnumber of characters,\twords\nsabinehe\t\t8706\t1384\nsengpei\t\t10637\t1690\nDr. Markus Stoye\t\t2518\t421\ndorigo\t\t201718\t35035\nAnna Stakia\t\t38442\t6037\nGilesStrong\t\t145807\t24451\nfabriciojm\t\t32398\t5437\njosaitis\t\t5473\t913\nPablo de Castro\t\t104523\t17027\nioannapapavergou\t\t15307\t2613\nPietro Vischia\t\t52038\t9016\nceciliatosciri\t\t37301\t6231\namva4np\t\t245515\t41297\nalesaggio\t\t68032\t11595\nAndrea Giammanco\t\t47939\t8037\nalexanderheld\t\t15016\t2529\nGreg Kotkowski\t\t85555\t14515\n"
],
[
"for author in blog:\n with open(author + '.pkl', 'w') as fout:\n pickle.dump(blog[author], fout) ",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5a5bcd3bb573dba7711b47b6f7c6ab909df288 | 515,757 | ipynb | Jupyter Notebook | P1.ipynb | narulakartik/FindLaneLines | 196a2ceb1ff7e94a5fc622276568950d87442579 | [
"MIT"
] | null | null | null | P1.ipynb | narulakartik/FindLaneLines | 196a2ceb1ff7e94a5fc622276568950d87442579 | [
"MIT"
] | null | null | null | P1.ipynb | narulakartik/FindLaneLines | 196a2ceb1ff7e94a5fc622276568950d87442579 | [
"MIT"
] | null | null | null | 625.918689 | 150,740 | 0.944268 | [
[
[
"# Self-Driving Car Engineer Nanodegree\n\n\n## Project: **Finding Lane Lines on the Road** \n***\nIn this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip \"raw-lines-example.mp4\" (also contained in this repository) to see what the output should look like after using the helper functions below. \n\nOnce you have a result that looks roughly like \"raw-lines-example.mp4\", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.\n\nIn addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.\n\n---\nLet's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the \"play\" button above) to display the image.\n\n**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the \"Kernel\" menu above and selecting \"Restart & Clear Output\".**\n\n---",
"_____no_output_____"
],
[
"**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**\n\n---\n\n<figure>\n <img src=\"examples/line-segments-example.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> \n </figcaption>\n</figure>\n <p></p> \n<figure>\n <img src=\"examples/laneLines_thirdPass.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your goal is to connect/average/extrapolate line segments to get output like this</p> \n </figcaption>\n</figure>",
"_____no_output_____"
],
[
"**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** ",
"_____no_output_____"
],
[
"## Import Packages",
"_____no_output_____"
]
],
[
[
"#importing some useful packages\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Read in an Image",
"_____no_output_____"
]
],
[
[
"#reading in an image\nimage = mpimg.imread('test_images/solidWhiteRight.jpg')\n\n#printing out some stats and plotting\nprint('This image is:', type(image), 'with dimensions:', image.shape)\nplt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')",
"This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)\n"
]
],
[
[
"## Ideas for Lane Detection Pipeline",
"_____no_output_____"
],
[
"**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**\n\n`cv2.inRange()` for color selection \n`cv2.fillPoly()` for regions selection \n`cv2.line()` to draw lines on an image given endpoints \n`cv2.addWeighted()` to coadd / overlay two images\n`cv2.cvtColor()` to grayscale or change color\n`cv2.imwrite()` to output images to file \n`cv2.bitwise_and()` to apply a mask to an image\n\n**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**",
"_____no_output_____"
],
[
"## Helper Functions",
"_____no_output_____"
],
[
"Below are some helper functions to help get you started. They should look familiar from the lesson!",
"_____no_output_____"
]
],
[
[
"import math\n\ndef grayscale(img):\n \"\"\"Applies the Grayscale transform\n This will return an image with only one color channel\n but NOTE: to see the returned image as grayscale\n (assuming your grayscaled image is called 'gray')\n you should call plt.imshow(gray, cmap='gray')\"\"\"\n return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Or use BGR2GRAY if you read an image with cv2.imread()\n # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n \ndef canny(img, low_threshold, high_threshold):\n \"\"\"Applies the Canny transform\"\"\"\n return cv2.Canny(img, low_threshold, high_threshold)\n\ndef gaussian_blur(img, kernel_size):\n \"\"\"Applies a Gaussian Noise kernel\"\"\"\n return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)\n\ndef region_of_interest(img, vertices):\n \"\"\"\n Applies an image mask.\n \n Only keeps the region of the image defined by the polygon\n formed from `vertices`. The rest of the image is set to black.\n `vertices` should be a numpy array of integer points.\n \"\"\"\n #defining a blank mask to start with\n mask = np.zeros_like(img) \n \n #defining a 3 channel or 1 channel color to fill the mask with depending on the input image\n if len(img.shape) > 2:\n channel_count = img.shape[2] # i.e. 3 or 4 depending on your image\n ignore_mask_color = (255,) * channel_count\n else:\n ignore_mask_color = 255\n \n #filling pixels inside the polygon defined by \"vertices\" with the fill color \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n \n #returning the image only where mask pixels are nonzero\n masked_image = cv2.bitwise_and(img, mask)\n return masked_image\n\n\ndef draw_lines(img, lines, color=[255, 0, 0], thickness=2):\n \"\"\"\n NOTE: this is the function you might want to use as a starting point once you want to \n average/extrapolate the line segments you detect to map out the full\n extent of the lane (going from the result shown in raw-lines-example.mp4\n to that shown in P1_example.mp4). \n \n Think about things like separating line segments by their \n slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left\n line vs. the right line. Then, you can average the position of each of \n the lines and extrapolate to the top and bottom of the lane.\n \n This function draws `lines` with `color` and `thickness`. 
\n Lines are drawn on the image inplace (mutates the image).\n If you want to make the lines semi-transparent, think about combining\n this function with the weighted_img() function below\n \"\"\"\n \n leftslope=[]\n rightslope=[]\n x_left=[]\n y_left=[]\n x_right=[]\n y_right=[]\n minslope=0.1\n maxslope=1000\n for line in lines:\n for x1,y1,x2,y2 in line:\n slope=(y2-y1)/(x2-x1)\n if abs(slope)<minslope or abs(slope)>maxslope:\n continue\n if slope>0:\n leftslope.append(slope)\n x_left.append(x1)\n y_left.append(y1)\n else :\n rightslope.append(slope)\n x_right.append(x1)\n y_right.append(y1)\n \n ysize, xsize = img.shape[0], img.shape[1]\n XX, YY = np.meshgrid(np.arange(0, xsize), np.arange(0, ysize))\n \n \n region_top_y = np.amin(YY) \n \n newline=[] \n \n if len(leftslope)>0 : \n \n meanslope=np.mean(leftslope)\n x_mean=np.mean(x_left)\n y_mean=np.mean(y_left)\n b=y_mean-meanslope*x_mean\n x1 = int((region_top_y - b)/meanslope)\n x2 = int((ysize - b)/meanslope)\n \n newline.append([(x1, region_top_y, x2, ysize)]) \n \n \n \n if len(rightslope)>0: \n \n meanslope=np.mean(rightslope)\n x_mean=np.mean(x_right)\n y_mean=np.mean(y_right)\n \n b=y_mean-meanslope*x_mean\n x1 = int((region_top_y - b)/meanslope)\n x2 = int((ysize - b)/meanslope)\n \n newline.append([(x1, region_top_y, x2, ysize)]) \n \n \n \n \n \n for line in newline:\n for x1,y1,x2,y2 in line:\n cv2.line(img, (x1, y1), (x2, y2), color, thickness)\n \n \n \n \n \n\ndef hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):\n \"\"\"\n `img` should be the output of a Canny transform.\n \n Returns an image with hough lines drawn.\n \"\"\"\n lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)\n line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)\n draw_lines(line_img, lines)\n return line_img\n\n# Python 3 has support for cool math symbols.\n\ndef weighted_img(img, initial_img, α=0.8, β=1., γ=0.):\n \"\"\"\n `img` is the output of the hough_lines(), An image with lines drawn on it.\n Should be a blank image (all black) with lines drawn on it.\n \n `initial_img` should be the image before any processing.\n \n The result image is computed as follows:\n \n initial_img * α + img * β + γ\n NOTE: initial_img and img must be the same shape!\n \"\"\"\n return cv2.addWeighted(initial_img, α, img, β, γ)",
"_____no_output_____"
]
],
[
[
"## Test Images\n\nBuild your pipeline to work on the images in the directory \"test_images\" \n**You should make sure your pipeline works well on these images before you try the videos.**",
"_____no_output_____"
]
],
[
[
"import os\na=os.listdir(\"test_images/\")",
"_____no_output_____"
]
],
[
[
"## Build a Lane Finding Pipeline\n\n",
"_____no_output_____"
],
[
"Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.\n\nTry tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.",
"_____no_output_____"
]
],
[
[
"# TODO: Build your pipeline that will draw lane lines on the test_images\n# then save them to the test_images_output directory.\n\n#step 1: grayscale the image\ngray=grayscale(image)\nplt.imshow(gray)\n\n\nvertices=np.array([(500,300),(100,550), (900,550)])\nregion=region_of_interest(gray, np.int32([vertices]))\n#plt.imshow(region)\n\n\n#step 2: edge detection\ncanny_image=canny(gray, 10 ,150)\nplt.imshow(canny_image)\n\n#step 3: remove gaussian noise\n#noise_removed=gaussian_blur\ngaussian=gaussian_blur(canny_image,5)\nplt.imshow(gaussian)\n\n#step4: region masking\nregion=region_of_interest(gaussian, np.int32([vertices]))\n\n#step 5: draw haugh lines\nhough=hough_lines(region, 2, np.pi/180, 34, 10, 5)\nplt.imshow(hough)\n\n#step 6: final weighted image with lines\nfinal=weighted_img(hough, image)\nplt.imshow(final)",
"_____no_output_____"
]
],
[
[
"## Test on Videos\n\nYou know what's cooler than drawing lanes over images? Drawing lanes over video!\n\nWe can test our solution on two provided videos:\n\n`solidWhiteRight.mp4`\n\n`solidYellowLeft.mp4`\n\n**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**\n\n**If you get an error that looks like this:**\n```\nNeedDownloadError: Need ffmpeg exe. \nYou can download it by calling: \nimageio.plugins.ffmpeg.download()\n```\n**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**",
"_____no_output_____"
]
],
[
[
"# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML",
"_____no_output_____"
],
[
"def process_image(image):\n # NOTE: The output you return should be a color image (3 channel) for processing video below\n # TODO: put your pipeline here,\n # you should return the final output (image where lines are drawn on lanes)\n #step 1: grayscale the image\n gray=grayscale(image)\n \n\n\n vertices=np.array([(500,300),(100,550), (900,550)])\n region=region_of_interest(gray, np.int32([vertices]))\n\n\n\n #step 2: edge detection\n canny_image=canny(gray, 10 ,150)\n plt.imshow(canny_image)\n\n #step 3: remove gaussian noise\n #noise_removed=gaussian_blur\n gaussian=gaussian_blur(canny_image,5)\n plt.imshow(gaussian)\n\n #step4: region masking\n region=region_of_interest(gaussian, np.int32([vertices]))\n\n #step 5: draw haugh lines\n hough=hough_lines(region, 2, np.pi/180, 15, 10, 70)\n plt.imshow(hough)\n\n #step 6: final weighted image with lines\n final=weighted_img(hough, image)\n plt.imshow(final)\n\n return final",
"_____no_output_____"
]
],
[
[
"Let's try the one with the solid white lane on the right first ...",
"_____no_output_____"
]
],
[
[
"white_output = 'test_videos_output/solidWhiteRight.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\").subclip(0,5)\nclip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\")\nwhite_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!\n%time white_clip.write_videofile(white_output, audio=False)",
"[MoviePy] >>>> Building video test_videos_output/solidWhiteRight.mp4\n[MoviePy] Writing video test_videos_output/solidWhiteRight.mp4\n"
]
],
[
[
"Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.",
"_____no_output_____"
]
],
[
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(white_output))",
"_____no_output_____"
]
],
[
[
"## Improve the draw_lines() function\n\n**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\".**\n\n**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**",
"_____no_output_____"
],
[
"Now for the one with the solid yellow lane on the left. This one's more tricky!",
"_____no_output_____"
]
],
[
[
"yellow_output = 'test_videos_output/solidYellowLeft.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)\nclip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')\nyellow_clip = clip2.fl_image(process_image)\n%time yellow_clip.write_videofile(yellow_output, audio=False)",
"[MoviePy] >>>> Building video test_videos_output/solidYellowLeft.mp4\n[MoviePy] Writing video test_videos_output/solidYellowLeft.mp4\n"
],
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(yellow_output))",
"_____no_output_____"
]
],
[
[
"## Writeup and Submission\n\nIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.\n",
"_____no_output_____"
],
[
"## Optional Challenge\n\nTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!",
"_____no_output_____"
]
],
[
[
"challenge_output = 'test_videos_output/challenge.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)\nclip3 = VideoFileClip('test_videos/challenge.mp4')\nchallenge_clip = clip3.fl_image(process_image)\n%time challenge_clip.write_videofile(challenge_output, audio=False)\n",
"[MoviePy] >>>> Building video test_videos_output/challenge.mp4\n[MoviePy] Writing video test_videos_output/challenge.mp4\n"
],
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(challenge_output))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
cb5a643666b4219b28d4934c1f7437b480d29a0e | 101,156 | ipynb | Jupyter Notebook | 01. Tensorflow v2/Tensorflow v2.ipynb | pranavraikote/Tensorflow-Tutorials | d07f9e9d5452f2f09a8a4363e0b2240ba3bd339a | [
"MIT"
] | 3 | 2020-12-14T17:03:05.000Z | 2022-01-31T16:50:58.000Z | 01. Tensorflow v2/Tensorflow v2.ipynb | pranavraikote/Tensorflow-Tutorials | d07f9e9d5452f2f09a8a4363e0b2240ba3bd339a | [
"MIT"
] | null | null | null | 01. Tensorflow v2/Tensorflow v2.ipynb | pranavraikote/Tensorflow-Tutorials | d07f9e9d5452f2f09a8a4363e0b2240ba3bd339a | [
"MIT"
] | 1 | 2020-07-05T10:34:05.000Z | 2020-07-05T10:34:05.000Z | 64.266836 | 32,408 | 0.795642 | [
[
[
"# Introduction to TensorFlow v2 : Basics",
"_____no_output_____"
],
[
"### Importing and printing the versions",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\n\nprint(\"TensorFlow version: {}\".format(tf.__version__))\nprint(\"Eager execution is: {}\".format(tf.executing_eagerly()))\nprint(\"Keras version: {}\".format(tf.keras.__version__))",
"TensorFlow version: 2.3.1\nEager execution is: True\nKeras version: 2.4.0\n"
]
],
[
[
"### TensorFlow Variables\n\n[Tensors](https://www.tensorflow.org/guide/tensor) are multi-dimensional arrays in TensorFlow. But, Tensors are immutable in nature. [Variables](https://www.tensorflow.org/guide/variable) are a way to store data which can be manipulated and changed easily. Variables are automatically placed on the fastest compatible device for it's datatype. For ex: If GPU is found, the tensors are automatically placed on GPU directly. ",
"_____no_output_____"
]
],
[
[
"var = 1\n\n# Defining a Tensorflow Variables\nten = tf.Variable(7) \nanother_tensor = tf.Variable([[1, 2],[3, 4]]) ",
"_____no_output_____"
],
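[
"# Illustrative sketch (added example, not in the original notebook; the device string depends on your machine):\n# tf.Variable exposes useful attributes such as name, dtype, shape and the device it lives on\nprint(ten.name, ten.dtype)\nprint(another_tensor.shape)\nprint(another_tensor.device)",
"_____no_output_____"
],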
[
"var, ten, another_tensor",
"_____no_output_____"
]
],
[
[
"### Creating new Variables",
"_____no_output_____"
]
],
[
[
"f1 = tf.Variable(100.6)\nprint(f1)",
"<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=100.6>\n"
]
],
[
[
"### Assigning values to existing Variables",
"_____no_output_____"
]
],
[
[
"# Assign and print the Data-Type\nprint(f1.assign(25))\nprint(f1.dtype)",
"<tf.Variable 'UnreadVariable' shape=() dtype=float32, numpy=25.0>\n<dtype: 'float32'>\n"
],
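[
"# Illustrative sketch (added example): besides assign(), existing variables can also be\n# updated in place with assign_add() and assign_sub()\nprint(f1.assign_add(5))\nprint(f1.assign_sub(10))",
"_____no_output_____"
],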
[
"f2 = tf.Variable(7, dtype = tf.float64)\nprint(f2.dtype)",
"<dtype: 'float64'>\n"
],
[
"# Creating a TensorFlow constant - Value cannot be changed in future\nconstant_var = tf.constant(10)\nprint(constant_var)",
"tf.Tensor(10, shape=(), dtype=int32)\n"
]
],
[
[
"### Extracting the value from a Tensor and formatting like a Numpy array using .numpy()",
"_____no_output_____"
]
],
[
[
"constant_var.numpy()",
"_____no_output_____"
]
],
[
[
"### Rank and Shape of Tensor",
"_____no_output_____"
],
[
"About [Rank and Shape](https://www.tensorflow.org/guide/tensor#about_shapes) in TensorFlow",
"_____no_output_____"
]
],
[
[
"tf.rank(another_tensor)",
"_____no_output_____"
],
[
"tf.shape(another_tensor)",
"_____no_output_____"
],
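[
"# Illustrative sketch (added example): .shape gives the static shape as a TensorShape object,\n# while tf.shape() returns the shape as a tensor computed at runtime\nprint(another_tensor.shape)\nprint(tf.shape(another_tensor).numpy())",
"_____no_output_____"
],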
[
"new_tensor = tf.Variable([ [ [0., 1., 2.], [3., 4., 5.] ], [ [6., 7., 8.], [9., 10., 11.] ] ]) \nprint(new_tensor.shape)\nprint(tf.rank(new_tensor))",
"(2, 2, 3)\ntf.Tensor(3, shape=(), dtype=int32)\n"
]
],
[
[
"### Reshaping Tensors",
"_____no_output_____"
]
],
[
[
"new_reshape = tf.reshape(new_tensor, [2, 6]) \nrecent_reshape = tf.reshape(new_tensor, [1, 12])",
"_____no_output_____"
],
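[
"# Illustrative sketch (added example, variable name chosen for illustration): one dimension can be\n# given as -1 and TensorFlow infers it from the total number of elements (12 elements -> shape (3, 4) here)\ninferred_reshape = tf.reshape(new_tensor, [3, -1])\nprint(inferred_reshape.shape)",
"_____no_output_____"
],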
[
"print(new_reshape)\nprint(recent_reshape)",
"tf.Tensor(\n[[ 0. 1. 2. 3. 4. 5.]\n [ 6. 7. 8. 9. 10. 11.]], shape=(2, 6), dtype=float32)\ntf.Tensor([[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11.]], shape=(1, 12), dtype=float32)\n"
]
],
[
[
"### Broadcasting Feature",
"_____no_output_____"
]
],
[
[
"new_tensor + 4",
"_____no_output_____"
],
[
"new_tensor - 4",
"_____no_output_____"
],
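[
"# Illustrative sketch (added example): broadcasting is not limited to scalars - a (3,) vector\n# is stretched across the last axis of the (2, 2, 3) tensor\nnew_tensor + tf.constant([10., 20., 30.])",
"_____no_output_____"
],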
[
"new_tensor * 4",
"_____no_output_____"
]
],
[
[
"### Matrix Multiplication",
"_____no_output_____"
]
],
[
[
"new_tensor * new_tensor",
"_____no_output_____"
],
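[
"# Illustrative note (added example): the * operator above is element-wise multiplication;\n# proper matrix multiplication uses tf.matmul or the @ operator, as in the next cell\ntf.constant([[1, 2], [3, 4]]) @ tf.constant([[5, 6], [7, 8]])",
"_____no_output_____"
],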
[
"u = tf.constant([[5, 6, 7]])\nv = tf.constant([[8, 9, 0]])\nprint('Matrix Multiplication - Transpose')\nprint(tf.matmul(u, tf.transpose(a=v)))",
"Matrix Multiplication - Transpose\ntf.Tensor([[94]], shape=(1, 1), dtype=int32)\n"
]
],
[
[
"### Type Casting",
"_____no_output_____"
]
],
[
[
"int_tensor = tf.cast(ten, dtype=tf.float32)\nprint(int_tensor)",
"tf.Tensor(7.0, shape=(), dtype=float32)\n"
]
],
[
[
"### Arithmetic Operations",
"_____no_output_____"
]
],
[
[
"a = tf.random.normal(shape=(2, 2))\nb = tf.random.normal(shape=(2, 2))\n\nc = a + b\nd = tf.square(c)\ne = tf.exp(d)\n\nprint('Addition - {}'.format(c))\nprint('Square Root - {}'.format(d))\nprint('Exponent - {}'.format(e))",
"Addition - [[ 2.152709 -1.7924592]\n [ 2.1677308 2.0514646]]\nSquare Root - [[4.634156 3.2129102]\n [4.6990566 4.208507 ]]\nExponent - [[102.941025 24.851303]\n [109.8435 67.25606 ]]\n"
]
],
[
[
"# TensorFlow v2 Functions",
"_____no_output_____"
],
[
"### Squared Difference Function",
"_____no_output_____"
]
],
[
[
"#Squared Difference Function\nx = [2, 4, 6, 8, 12]\ny = 6\n\n#(x-y)*(x-y)\nresult = tf.math.squared_difference(x, y)\nresult",
"_____no_output_____"
]
],
[
[
"### Reduce Mean",
"_____no_output_____"
]
],
[
[
"numbers = tf.constant([[6., 9.], [3., 5.]])\nprint(numbers)",
"tf.Tensor(\n[[6. 9.]\n [3. 5.]], shape=(2, 2), dtype=float32)\n"
],
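[
"# Illustrative sketch (added example): the same pattern works for other reductions,\n# e.g. tf.reduce_sum, tf.reduce_max and tf.reduce_min\nprint(tf.reduce_sum(input_tensor = numbers))\nprint(tf.reduce_max(input_tensor = numbers))",
"_____no_output_____"
],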
[
"tf.reduce_mean(input_tensor = numbers)",
"_____no_output_____"
]
],
[
[
"### Mean across columns",
"_____no_output_____"
]
],
[
[
"# Reduce rows -> Find mean across columns\n#(6. + 3.)/2, (9. + 5.)/2\nprint(tf.reduce_mean(input_tensor = numbers, axis = 0))",
"tf.Tensor([4.5 7. ], shape=(2,), dtype=float32)\n"
],
[
"# (6. + 3.)/2, (9. + 5.)/2\nprint(tf.reduce_mean(input_tensor = numbers, axis = 0, keepdims = True))",
"tf.Tensor([[4.5 7. ]], shape=(1, 2), dtype=float32)\n"
]
],
[
[
"### Mean across rows",
"_____no_output_____"
]
],
[
[
"# Reduce columns -> Find mean across rows\n#(6. + 9.)/2, (3. + 5.)/2\nprint(tf.reduce_mean(input_tensor = numbers, axis = 1))",
"tf.Tensor([7.5 4. ], shape=(2,), dtype=float32)\n"
],
[
"# (6. + 9.)/2, (3. + 5.)/2\nprint(tf.reduce_mean(input_tensor = numbers, axis = 1, keepdims = True))",
"tf.Tensor(\n[[7.5]\n [4. ]], shape=(2, 1), dtype=float32)\n"
]
],
[
[
"### Generating normal distribution in a tensor",
"_____no_output_____"
]
],
[
[
"print(tf.random.normal(shape = (3, 2), mean = 10, stddev = 2, dtype = tf.float32, seed = None, name = None))",
"tf.Tensor(\n[[11.67767 10.190011]\n [ 4.638527 10.937168]\n [11.941066 8.749236]], shape=(3, 2), dtype=float32)\n"
]
],
[
[
"### Generating uniform distribution in a tensor",
"_____no_output_____"
]
],
[
[
"tf.random.uniform(shape = (3, 2), minval = 0, maxval = 1, dtype = tf.float32, seed = None, name = None)",
"_____no_output_____"
]
],
[
[
"### Random Seed in Tensorflow",
"_____no_output_____"
]
],
[
[
"print('Random Seed - 11\\n')\ntf.random.set_seed(11)\nrandom_1 = tf.random.uniform(shape = (2, 2), maxval = 7, dtype = tf.int32)\nrandom_2 = tf.random.uniform(shape = (2, 2), maxval = 7, dtype = tf.int32)\nprint(random_1) \nprint(random_2)\nprint('\\n')\n\nprint('Random Seed - 12\\n')\ntf.random.set_seed(12)\nrandom_1 = tf.random.uniform(shape = (2, 2), maxval = 7, dtype = tf.int32)\nrandom_2 = tf.random.uniform(shape = (2, 2), maxval = 7, dtype = tf.int32)\nprint(random_1) \nprint(random_2)\nprint('\\n')\n\nprint('Random Seed - 11\\n')\ntf.random.set_seed(11)\nrandom_1 = tf.random.uniform(shape = (2, 2), maxval = 7, dtype = tf.int32)\nrandom_2 = tf.random.uniform(shape = (2, 2), maxval = 7, dtype = tf.int32)\nprint(random_1) \nprint(random_2)",
"Random Seed - 11\n\ntf.Tensor(\n[[4 3]\n [0 2]], shape=(2, 2), dtype=int32)\ntf.Tensor(\n[[6 3]\n [6 3]], shape=(2, 2), dtype=int32)\n\n\nRandom Seed - 12\n\ntf.Tensor(\n[[0 4]\n [2 2]], shape=(2, 2), dtype=int32)\ntf.Tensor(\n[[0 4]\n [0 1]], shape=(2, 2), dtype=int32)\n\n\nRandom Seed - 11\n\ntf.Tensor(\n[[4 3]\n [0 2]], shape=(2, 2), dtype=int32)\ntf.Tensor(\n[[6 3]\n [6 3]], shape=(2, 2), dtype=int32)\n"
]
],
[
[
"### Max, Min and Indices",
"_____no_output_____"
]
],
[
[
"tensor_m = tf.constant([2, 20, 15, 32, 77, 29, -16, -51, 29])\nprint(tensor_m)\n\n# Max argument\nindex = tf.argmax(input = tensor_m)\nprint('Index of max: {}\\n'.format(index))\nprint('Max element: {}'.format(tensor_m[index].numpy()))",
"tf.Tensor([ 2 20 15 32 77 29 -16 -51 29], shape=(9,), dtype=int32)\nIndex of max: 4\n\nMax element: 77\n"
],
[
"print(tensor_m)\n\n# Min argument\nindex = tf.argmin(input = tensor_m)\nprint('Index of minumum element: {}\\n'.format(index))\nprint('Minimum element: {}'.format(tensor_m[index].numpy()))",
"tf.Tensor([ 2 20 15 32 77 29 -16 -51 29], shape=(9,), dtype=int32)\nIndex of minumum element: 7\n\nMinimum element: -51\n"
]
],
[
[
"# TensorFlow v2 : Advanced",
"_____no_output_____"
],
[
"### Computing gradients with GradientTape - Automatic Differentiation\n\nTensorFlow v2 has this API for recording gradient values based on the values computed in the forward pass with respect to inputs. Since we need values to be remembered during the forward pass, the tf.GradientTape provides us a way to automatically differentiate a certain function wrt to the input variable specified. To read more on Auto Diiferentiation in TensorFlow v2 click [here]https://www.tensorflow.org/guide/autodiff).",
"_____no_output_____"
]
],
[
[
"x = tf.random.normal(shape=(2, 2))\ny = tf.random.normal(shape=(2, 2))\n\nwith tf.GradientTape() as tape:\n \n # Start recording the history of operations applied to x\n tape.watch(x)\n \n # Do some math using x and y\n z = tf.sqrt(tf.square(x) + tf.square(y)) \n \n # What's the gradient of z with respect to x\n dz = tape.gradient(z, x)\n print(dz)",
"tf.Tensor(\n[[0.57269156 0.9751479 ]\n [0.53572375 0.8537005 ]], shape=(2, 2), dtype=float32)\n"
]
],
[
[
"tf.GradientTape API automatically watches the function to be differentiated, no need to explicitly mention/run tape.watch()",
"_____no_output_____"
]
],
[
[
"x = tf.Variable(x)\n\nwith tf.GradientTape() as tape:\n \n # Doing some calculations using x and y\n z = tf.sqrt(tf.square(x) + tf.square(y))\n \n # Getting the gradient of z wrt x\n dz = tape.gradient(z, x)\n print(dz)",
"tf.Tensor(\n[[0.57269156 0.9751479 ]\n [0.53572375 0.8537005 ]], shape=(2, 2), dtype=float32)\n"
]
],
[
[
"We can perform differentiation in chains also, using two tapes!",
"_____no_output_____"
]
],
[
[
"with tf.GradientTape() as outer_tape:\n \n with tf.GradientTape() as tape:\n \n # Computation using x and y\n z = tf.sqrt(tf.square(x) + tf.square(y))\n \n # First differentiation of z wrt x\n dz = tape.gradient(z, x)\n \n # Second differentiation of z wrt x \n dz2 = outer_tape.gradient(dz, x)\n print(dz2)",
"tf.Tensor(\n[[0.47573683 0.04185158]\n [1.0060201 0.24222922]], shape=(2, 2), dtype=float32)\n"
]
],
[
[
"### Tensorflow v2 Graph Function\n\nRead [here](https://www.tensorflow.org/guide/intro_to_graphs) for more information on Computation Graphs and TensorFlow Functions of TensorFlow v1",
"_____no_output_____"
]
],
[
[
"#Normal Python function\ndef f1(x, y):\n return tf.reduce_mean(input_tensor=tf.multiply(x ** 2, 5) + y**2)\n\n#Converting that into Tensorflow Graph function\nf2 = tf.function(f1)\n\nx = tf.constant([7., -2.])\ny = tf.constant([8., 6.])\n\n#Funtion 1 and function 2 return the same value, but function 2 executes as a TensorFlow graph\nassert f1(x,y).numpy() == f2(x,y).numpy()\n\nans = f1(x,y)\nprint(ans)\n\nans = f2(x,y)\nprint(ans)",
"tf.Tensor(182.5, shape=(), dtype=float32)\ntf.Tensor(182.5, shape=(), dtype=float32)\n"
]
],
[
[
"# TensorFlow v2 : Linear Regression and tf.function",
"_____no_output_____"
],
[
"### Let's see what is the importance of tf.function with a small example of Linear Regression",
"_____no_output_____"
]
],
[
[
"input_dim = 2\noutput_dim = 1\nlearning_rate = 0.01\n\n# This is our weight matrix\nw = tf.Variable(tf.random.uniform(shape=(input_dim, output_dim)))\n\n# This is our bias vector\nb = tf.Variable(tf.zeros(shape=(output_dim,)))\n\ndef compute_predictions(features):\n return tf.matmul(features, w) + b\n\ndef compute_loss(labels, predictions):\n return tf.reduce_mean(tf.square(labels - predictions))\n\ndef train_on_batch(x, y):\n with tf.GradientTape() as tape:\n \n predictions = compute_predictions(x)\n loss = compute_loss(y, predictions)\n \n # Note that `tape.gradient` works with a list as well (w, b).\n dloss_dw, dloss_db = tape.gradient(loss, [w, b])\n \n w.assign_sub(learning_rate * dloss_dw)\n b.assign_sub(learning_rate * dloss_db)\n \n return loss",
"_____no_output_____"
],
[
"import numpy as np\nimport random\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Prepare a dataset.\nnum_samples = 10000\nnegative_samples = np.random.multivariate_normal(mean=[0, 3], cov=[[1, 0.5],[0.5, 1]], size=num_samples)\npositive_samples = np.random.multivariate_normal(mean=[3, 0], cov=[[1, 0.5],[0.5, 1]], size=num_samples)\nfeatures = np.vstack((negative_samples, positive_samples)).astype(np.float32)\nlabels = np.vstack((np.zeros((num_samples, 1), dtype='float32'), np.ones((num_samples, 1), dtype='float32')))\n\nplt.scatter(features[:, 0], features[:, 1], c=labels[:, 0])",
"_____no_output_____"
],
[
"# Shuffle the data.\nindices = np.random.permutation(len(features))\nfeatures = features[indices]\nlabels = labels[indices]\n\n# Create a tf.data.Dataset object for easy batched iteration\ndataset = tf.data.Dataset.from_tensor_slices((features, labels))\ndataset = dataset.shuffle(buffer_size=1024).batch(256)\n\nfor epoch in range(10):\n for step, (x, y) in enumerate(dataset):\n loss = train_on_batch(x, y)\n print('Epoch %d: last batch loss = %.4f' % (epoch, float(loss)))",
"Epoch 0: last batch loss = 0.0806\nEpoch 1: last batch loss = 0.0757\nEpoch 2: last batch loss = 0.0427\nEpoch 3: last batch loss = 0.0150\nEpoch 4: last batch loss = 0.0360\nEpoch 5: last batch loss = 0.0294\nEpoch 6: last batch loss = 0.0151\nEpoch 7: last batch loss = 0.0229\nEpoch 8: last batch loss = 0.0184\nEpoch 9: last batch loss = 0.0294\n"
],
[
"predictions = compute_predictions(features)\nplt.scatter(features[:, 0], features[:, 1], c=predictions[:, 0] > 0.5)",
"_____no_output_____"
]
],
[
[
"### Analysizing the code run time",
"_____no_output_____"
],
[
"TensorFlow v2 with Eager Execution",
"_____no_output_____"
]
],
[
[
"import time\n\nt0 = time.time()\nfor epoch in range(20):\n for step, (x, y) in enumerate(dataset):\n loss = train_on_batch(x, y)\nt_end = time.time() - t0\nprint('Time per epoch: %.3f s' % (t_end / 20,))",
"Time per epoch: 0.400 s\n"
]
],
[
[
"Adding the @tf.function to convert the function into a static graph (TensorFlow v1)",
"_____no_output_____"
]
],
[
[
"@tf.function\ndef train_on_batch_tf(x, y):\n with tf.GradientTape() as tape:\n predictions = compute_predictions(x)\n loss = compute_loss(y, predictions)\n dloss_dw, dloss_db = tape.gradient(loss, [w, b])\n w.assign_sub(learning_rate * dloss_dw)\n b.assign_sub(learning_rate * dloss_db)\n return loss",
"_____no_output_____"
]
],
[
[
"Running using the Static Graph method ",
"_____no_output_____"
]
],
[
[
"t0 = time.time()\nfor epoch in range(20):\n for step, (x, y) in enumerate(dataset):\n loss = train_on_batch_tf(x, y)\nt_end = time.time() - t0\nprint('Time per epoch: %.3f s' % (t_end / 20,))",
"Time per epoch: 0.259 s\n"
]
],
[
[
"## There is a huge decrease in the time taken per epoch!!!\n\n## Eager execution is great for debugging and printing results line-by-line, but when it's time to scale, static graphs are a researcher's best friends.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb5a649d983988f47b80cecde12cc8138f1ae8e9 | 13,472 | ipynb | Jupyter Notebook | 01_getting_started/overview.ipynb | olippuner/Pandas_Docu_as_Notebooks | 2a496d0684ba9d6502dd065b9125719f6b9fc3e7 | [
"BSD-3-Clause"
] | null | null | null | 01_getting_started/overview.ipynb | olippuner/Pandas_Docu_as_Notebooks | 2a496d0684ba9d6502dd065b9125719f6b9fc3e7 | [
"BSD-3-Clause"
] | null | null | null | 01_getting_started/overview.ipynb | olippuner/Pandas_Docu_as_Notebooks | 2a496d0684ba9d6502dd065b9125719f6b9fc3e7 | [
"BSD-3-Clause"
] | null | null | null | 41.838509 | 248 | 0.65239 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"# Package overview\n\npandas is a [Python](https://www.python.org) package providing fast,\nflexible, and expressive data structures designed to make working with\n“relational” or “labeled” data both easy and intuitive. It aims to be the\nfundamental high-level building block for doing practical, **real-world** data\nanalysis in Python. Additionally, it has the broader goal of becoming **the\nmost powerful and flexible open source data analysis/manipulation tool\navailable in any language**. It is already well on its way toward this goal.\n\npandas is well suited for many different kinds of data:\n\n> - Tabular data with heterogeneously-typed columns, as in an SQL table or\n Excel spreadsheet \n- Ordered and unordered (not necessarily fixed-frequency) time series data. \n- Arbitrary matrix data (homogeneously typed or heterogeneous) with row and\n column labels \n- Any other form of observational / statistical data sets. The data\n need not be labeled at all to be placed into a pandas data structure \n\n\n\nThe two primary data structures of pandas, `Series` (1-dimensional)\nand `DataFrame` (2-dimensional), handle the vast majority of typical use\ncases in finance, statistics, social science, and many areas of\nengineering. For R users, `DataFrame` provides everything that R’s\n`data.frame` provides and much more. pandas is built on top of [NumPy](https://www.numpy.org) and is intended to integrate well within a scientific\ncomputing environment with many other 3rd party libraries.\n\nHere are just a few of the things that pandas does well:\n\n> - Easy handling of **missing data** (represented as NaN) in floating point as\n well as non-floating point data \n- Size mutability: columns can be **inserted and deleted** from DataFrame and\n higher dimensional objects \n- Automatic and explicit **data alignment**: objects can be explicitly\n aligned to a set of labels, or the user can simply ignore the labels and\n let `Series`, `DataFrame`, etc. automatically align the data for you in\n computations \n- Powerful, flexible **group by** functionality to perform\n split-apply-combine operations on data sets, for both aggregating and\n transforming data \n- Make it **easy to convert** ragged, differently-indexed data in other\n Python and NumPy data structures into DataFrame objects \n- Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**\n of large data sets \n- Intuitive **merging** and **joining** data sets \n- Flexible **reshaping** and pivoting of data sets \n- **Hierarchical** labeling of axes (possible to have multiple labels per\n tick) \n- Robust IO tools for loading data from **flat files** (CSV and delimited),\n Excel files, databases, and saving / loading data from the ultrafast **HDF5\n format** \n- **Time series**-specific functionality: date range generation and frequency\n conversion, moving window statistics, date shifting, and lagging. \n\n\n\nMany of these principles are here to address the shortcomings frequently\nexperienced using other languages / scientific research environments. For data\nscientists, working with data is typically divided into multiple stages:\nmunging and cleaning data, analyzing / modeling it, then organizing the results\nof the analysis into a form suitable for plotting or tabular display. pandas\nis the ideal tool for all of these tasks.\n\nSome other notes\n\n> - pandas is **fast**. Many of the low-level algorithmic bits have been\n extensively tweaked in [Cython](https://cython.org) code. 
However, as with\n anything else generalization usually sacrifices performance. So if you focus\n on one feature for your application you may be able to create a faster\n specialized tool. \n- pandas is a dependency of [statsmodels](https://www.statsmodels.org/stable/index.html), making it an important part of the\n statistical computing ecosystem in Python. \n- pandas has been used extensively in production in financial applications. ",
"_____no_output_____"
],
[
"## Data structures\n\n|Dimensions|Name|Description|\n|:-------------:|:------------------:|:------------------------------------------------:|\n|1|Series|1D labeled homogeneously-typed array|\n|2|DataFrame|General 2D labeled, size-mutable tabular structure with potentially heterogeneously-typed column|",
"_____no_output_____"
],
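[
"The snippet below is a small illustrative sketch (not part of the original overview) showing how these two structures are typically constructed; the column and index names are arbitrary examples:\n\n```python\nimport pandas as pd\n\n# Series: a 1D labeled, homogeneously-typed array\ns = pd.Series([1.0, 2.5, 4.0], name='measurement')\n\n# DataFrame: a 2D labeled table whose columns may have different dtypes\ndf = pd.DataFrame({'city': ['Oslo', 'Lima'], 'temp_c': [3.5, 22.1]})\n\nprint(s)\nprint(df.dtypes)\n```",
"_____no_output_____"
],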
[
"### Why more than one data structure?\n\nThe best way to think about the pandas data structures is as flexible\ncontainers for lower dimensional data. For example, DataFrame is a container\nfor Series, and Series is a container for scalars. We would like to be\nable to insert and remove objects from these containers in a dictionary-like\nfashion.\n\nAlso, we would like sensible default behaviors for the common API functions\nwhich take into account the typical orientation of time series and\ncross-sectional data sets. When using the N-dimensional array (ndarrays) to store 2- and 3-dimensional\ndata, a burden is placed on the user to consider the orientation of the data\nset when writing functions; axes are considered more or less equivalent (except\nwhen C- or Fortran-contiguousness matters for performance). In pandas, the axes\nare intended to lend more semantic meaning to the data; i.e., for a particular\ndata set, there is likely to be a “right” way to orient the data. The goal,\nthen, is to reduce the amount of mental effort required to code up data\ntransformations in downstream functions.\n\nFor example, with tabular data (DataFrame) it is more semantically helpful to\nthink of the **index** (the rows) and the **columns** rather than axis 0 and\naxis 1. Iterating through the columns of the DataFrame thus results in more\nreadable code:",
"_____no_output_____"
],
[
"\"\"\"\nfor col in df.columns:\n series = df[col]\n # do something with series\n\"\"\" ",
"_____no_output_____"
],
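[
"A concrete (purely illustrative) version of that loop, assuming a small example DataFrame:\n\n```python\nimport pandas as pd\n\ndf = pd.DataFrame({'a': [1, 2, 3], 'b': [4.0, 5.0, 6.0]})\n\nfor col in df.columns:\n    series = df[col]            # each column comes back as a Series\n    print(col, series.mean())   # do something with the series\n```",
"_____no_output_____"
],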
[
"## Mutability and copying of data\n\nAll pandas data structures are value-mutable (the values they contain can be\naltered) but not always size-mutable. The length of a Series cannot be\nchanged, but, for example, columns can be inserted into a DataFrame. However,\nthe vast majority of methods produce new objects and leave the input data\nuntouched. In general we like to **favor immutability** where sensible.",
"_____no_output_____"
],
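[
"A short sketch (illustrative only) of what value-mutability versus size-mutability means in practice:\n\n```python\nimport pandas as pd\n\ns = pd.Series([1, 2, 3])\ns.iloc[0] = 99                  # value-mutable: entries can be changed in place,\n                                # but the length of the Series cannot be changed\n\ndf = pd.DataFrame({'a': [1, 2, 3]})\ndf['b'] = df['a'] * 10          # size-mutable: columns can be inserted and deleted\n\ndoubled = df.mul(2)             # most methods return a new object,\nprint(df['a'].tolist())         # leaving the input untouched: [1, 2, 3]\n```",
"_____no_output_____"
],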
[
"## Getting support\n\nThe first stop for pandas issues and ideas is the [Github Issue Tracker](https://github.com/pandas-dev/pandas/issues). If you have a general question,\npandas community experts can answer through [Stack Overflow](https://stackoverflow.com/questions/tagged/pandas).",
"_____no_output_____"
],
[
"## Community\n\npandas is actively supported today by a community of like-minded individuals around\nthe world who contribute their valuable time and energy to help make open source\npandas possible. Thanks to [all of our contributors](https://github.com/pandas-dev/pandas/graphs/contributors).\n\nIf you’re interested in contributing, please visit the contributing guide.\n\npandas is a [NumFOCUS](https://numfocus.org/sponsored-projects) sponsored project.\nThis will help ensure the success of the development of pandas as a world-class open-source\nproject and makes it possible to [donate](https://pandas.pydata.org/donate.html) to the project.",
"_____no_output_____"
],
[
"## Project governance\n\nThe governance process that pandas project has used informally since its inception in 2008 is formalized in [Project Governance documents](https://github.com/pandas-dev/pandas-governance).\nThe documents clarify how decisions are made and how the various elements of our community interact, including the relationship between open source collaborative development and work that may be funded by for-profit or non-profit entities.\n\nWes McKinney is the Benevolent Dictator for Life (BDFL).",
"_____no_output_____"
],
[
"## Development team\n\nThe list of the Core Team members and more detailed information can be found on the [people’s page](https://github.com/pandas-dev/pandas-governance/blob/master/people.md) of the governance repo.",
"_____no_output_____"
],
[
"## Institutional partners\n\nThe information about current institutional partners can be found on [pandas website page](https://pandas.pydata.org/about.html).",
"_____no_output_____"
],
[
"## License",
"_____no_output_____"
],
[
"BSD 3-Clause License\n\nCopyright (c) 2008-2011, AQR Capital Management, LLC, Lambda Foundry, Inc. and PyData Development Team\nAll rights reserved.\n\nCopyright (c) 2011-2021, Open source contributors.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\n* Neither the name of the copyright holder nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
cb5a6da1d43eb566279d40a1be6bb825efc6c414 | 159,141 | ipynb | Jupyter Notebook | deepl/Regularization+-+v2.ipynb | stepinski/machinelearning | 1f84883a25616da4cd76bb4655267efd3421e561 | [
"MIT"
] | null | null | null | deepl/Regularization+-+v2.ipynb | stepinski/machinelearning | 1f84883a25616da4cd76bb4655267efd3421e561 | [
"MIT"
] | null | null | null | deepl/Regularization+-+v2.ipynb | stepinski/machinelearning | 1f84883a25616da4cd76bb4655267efd3421e561 | [
"MIT"
] | null | null | null | 156.020588 | 56,104 | 0.849856 | [
[
[
"# Regularization\n\nWelcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem**, if the training dataset is not big enough. Sure it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!\n\n**You will learn to:** Use regularization in your deep learning models.\n\nLet's first import the packages you are going to use.",
"_____no_output_____"
]
],
[
[
"# import packages\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec\nfrom reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters\nimport sklearn\nimport sklearn.datasets\nimport scipy.io\nfrom testCases import *\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'",
"_____no_output_____"
]
],
[
[
"**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head. \n\n<img src=\"images/field_kiank.png\" style=\"width:600px;height:350px;\">\n<caption><center> <u> **Figure 1** </u>: **Football field**<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>\n\n\nThey give you the following 2D dataset from France's past 10 games.",
"_____no_output_____"
]
],
[
[
"train_X, train_Y, test_X, test_Y = load_2D_dataset()",
"_____no_output_____"
]
],
[
[
"Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.\n- If the dot is blue, it means the French player managed to hit the ball with his/her head\n- If the dot is red, it means the other team's player hit the ball with their head\n\n**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.",
"_____no_output_____"
],
[
"**Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well. \n\nYou will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem. ",
"_____no_output_____"
],
[
"## 1 - Non-regularized model\n\nYou will use the following neural network (already implemented for you below). This model can be used:\n- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use \"`lambd`\" instead of \"`lambda`\" because \"`lambda`\" is a reserved keyword in Python. \n- in *dropout mode* -- by setting the `keep_prob` to a value less than one\n\nYou will first try the model without any regularization. Then, you will implement:\n- *L2 regularization* -- functions: \"`compute_cost_with_regularization()`\" and \"`backward_propagation_with_regularization()`\"\n- *Dropout* -- functions: \"`forward_propagation_with_dropout()`\" and \"`backward_propagation_with_dropout()`\"\n\nIn each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.",
"_____no_output_____"
]
],
[
[
"def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):\n \"\"\"\n Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.\n \n Arguments:\n X -- input data, of shape (input size, number of examples)\n Y -- true \"label\" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)\n learning_rate -- learning rate of the optimization\n num_iterations -- number of iterations of the optimization loop\n print_cost -- If True, print the cost every 10000 iterations\n lambd -- regularization hyperparameter, scalar\n keep_prob - probability of keeping a neuron active during drop-out, scalar.\n \n Returns:\n parameters -- parameters learned by the model. They can then be used to predict.\n \"\"\"\n \n grads = {}\n costs = [] # to keep track of the cost\n m = X.shape[1] # number of examples\n layers_dims = [X.shape[0], 20, 3, 1]\n \n # Initialize parameters dictionary.\n parameters = initialize_parameters(layers_dims)\n\n # Loop (gradient descent)\n\n for i in range(0, num_iterations):\n\n # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.\n if keep_prob == 1:\n a3, cache = forward_propagation(X, parameters)\n elif keep_prob < 1:\n a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)\n \n # Cost function\n if lambd == 0:\n cost = compute_cost(a3, Y)\n else:\n cost = compute_cost_with_regularization(a3, Y, parameters, lambd)\n \n # Backward propagation.\n assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout, \n # but this assignment will only explore one at a time\n if lambd == 0 and keep_prob == 1:\n grads = backward_propagation(X, Y, cache)\n elif lambd != 0:\n grads = backward_propagation_with_regularization(X, Y, cache, lambd)\n elif keep_prob < 1:\n grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)\n \n # Update parameters.\n parameters = update_parameters(parameters, grads, learning_rate)\n \n # Print the loss every 10000 iterations\n if print_cost and i % 10000 == 0:\n print(\"Cost after iteration {}: {}\".format(i, cost))\n if print_cost and i % 1000 == 0:\n costs.append(cost)\n \n # plot the cost\n plt.plot(costs)\n plt.ylabel('cost')\n plt.xlabel('iterations (x1,000)')\n plt.title(\"Learning rate =\" + str(learning_rate))\n plt.show()\n \n return parameters",
"_____no_output_____"
]
],
[
[
"Let's train the model without any regularization, and observe the accuracy on the train/test sets.",
"_____no_output_____"
]
],
[
[
"parameters = model(train_X, train_Y)\nprint (\"On the training set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)",
"Cost after iteration 0: 0.6557412523481002\nCost after iteration 10000: 0.16329987525724213\nCost after iteration 20000: 0.13851642423253263\n"
]
],
[
[
"The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.",
"_____no_output_____"
]
],
[
[
"plt.title(\"Model without regularization\")\naxes = plt.gca()\naxes.set_xlim([-0.75,0.40])\naxes.set_ylim([-0.75,0.65])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)",
"_____no_output_____"
]
],
[
[
"The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Lets now look at two techniques to reduce overfitting.",
"_____no_output_____"
],
[
"## 2 - L2 Regularization\n\nThe standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:\n$$J = -\\frac{1}{m} \\sum\\limits_{i = 1}^{m} \\large{(}\\small y^{(i)}\\log\\left(a^{[L](i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[L](i)}\\right) \\large{)} \\tag{1}$$\nTo:\n$$J_{regularized} = \\small \\underbrace{-\\frac{1}{m} \\sum\\limits_{i = 1}^{m} \\large{(}\\small y^{(i)}\\log\\left(a^{[L](i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[L](i)}\\right) \\large{)} }_\\text{cross-entropy cost} + \\underbrace{\\frac{1}{m} \\frac{\\lambda}{2} \\sum\\limits_l\\sum\\limits_k\\sum\\limits_j W_{k,j}^{[l]2} }_\\text{L2 regularization cost} \\tag{2}$$\n\nLet's modify your cost and observe the consequences.\n\n**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\\sum\\limits_k\\sum\\limits_j W_{k,j}^{[l]2}$ , use :\n```python\nnp.sum(np.square(Wl))\n```\nNote that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \\frac{1}{m} \\frac{\\lambda}{2} $.",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: compute_cost_with_regularization\n\ndef compute_cost_with_regularization(A3, Y, parameters, lambd):\n \"\"\"\n Implement the cost function with L2 regularization. See formula (2) above.\n \n Arguments:\n A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)\n Y -- \"true\" labels vector, of shape (output size, number of examples)\n parameters -- python dictionary containing parameters of the model\n \n Returns:\n cost - value of the regularized loss function (formula (2))\n \"\"\"\n m = Y.shape[1]\n W1 = parameters[\"W1\"]\n W2 = parameters[\"W2\"]\n W3 = parameters[\"W3\"]\n \n cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost\n \n ### START CODE HERE ### (approx. 1 line)\n L2_regularization_cost = (np.sum(np.square(W1))+ np.sum(np.square(W2))+ np.sum(np.square(W3)))*1/m*lambd/2\n ### END CODER HERE ###\n \n cost = cross_entropy_cost + L2_regularization_cost\n \n return cost",
"_____no_output_____"
],
[
"A3, Y_assess, parameters = compute_cost_with_regularization_test_case()\n\nprint(\"cost = \" + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))",
"cost = 1.78648594516\n"
]
],
[
[
"**Expected Output**: \n\n<table> \n <tr>\n <td>\n **cost**\n </td>\n <td>\n 1.78648594516\n </td>\n \n </tr>\n\n</table> ",
"_____no_output_____"
],
[
"Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost. \n\n**Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\\frac{d}{dW} ( \\frac{1}{2}\\frac{\\lambda}{m} W^2) = \\frac{\\lambda}{m} W$).",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: backward_propagation_with_regularization\n\ndef backward_propagation_with_regularization(X, Y, cache, lambd):\n \"\"\"\n Implements the backward propagation of our baseline model to which we added an L2 regularization.\n \n Arguments:\n X -- input dataset, of shape (input size, number of examples)\n Y -- \"true\" labels vector, of shape (output size, number of examples)\n cache -- cache output from forward_propagation()\n lambd -- regularization hyperparameter, scalar\n \n Returns:\n gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables\n \"\"\"\n \n m = X.shape[1]\n (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache\n \n dZ3 = A3 - Y\n \n ### START CODE HERE ### (approx. 1 line)\n dW3 = 1./m * np.dot(dZ3, A2.T) + None\n ### END CODE HERE ###\n db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)\n \n dA2 = np.dot(W3.T, dZ3)\n dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n ### START CODE HERE ### (approx. 1 line)\n dW2 = 1./m * np.dot(dZ2, A1.T) + None\n ### END CODE HERE ###\n db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)\n \n dA1 = np.dot(W2.T, dZ2)\n dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n ### START CODE HERE ### (approx. 1 line)\n dW1 = 1./m * np.dot(dZ1, X.T) + None\n ### END CODE HERE ###\n db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)\n \n gradients = {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3,\"dA2\": dA2,\n \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2, \"dA1\": dA1, \n \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n \n return gradients",
"_____no_output_____"
],
[
"X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()\n\ngrads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)\nprint (\"dW1 = \"+ str(grads[\"dW1\"]))\nprint (\"dW2 = \"+ str(grads[\"dW2\"]))\nprint (\"dW3 = \"+ str(grads[\"dW3\"]))",
"_____no_output_____"
]
],
[
[
"**Expected Output**:\n\n<table> \n <tr>\n <td>\n **dW1**\n </td>\n <td>\n [[-0.25604646 0.12298827 -0.28297129]\n [-0.17706303 0.34536094 -0.4410571 ]]\n </td>\n </tr>\n <tr>\n <td>\n **dW2**\n </td>\n <td>\n [[ 0.79276486 0.85133918]\n [-0.0957219 -0.01720463]\n [-0.13100772 -0.03750433]]\n </td>\n </tr>\n <tr>\n <td>\n **dW3**\n </td>\n <td>\n [[-1.77691347 -0.11832879 -0.09397446]]\n </td>\n </tr>\n</table> ",
"_____no_output_____"
],
[
"Let's now run the model with L2 regularization $(\\lambda = 0.7)$. The `model()` function will call: \n- `compute_cost_with_regularization` instead of `compute_cost`\n- `backward_propagation_with_regularization` instead of `backward_propagation`",
"_____no_output_____"
]
],
[
[
"parameters = model(train_X, train_Y, lambd = 0.7)\nprint (\"On the train set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)",
"_____no_output_____"
]
],
[
[
"Congrats, the test set accuracy increased to 93%. You have saved the French football team!\n\nYou are not overfitting the training data anymore. Let's plot the decision boundary.",
"_____no_output_____"
]
],
[
[
"plt.title(\"Model with L2-regularization\")\naxes = plt.gca()\naxes.set_xlim([-0.75,0.40])\naxes.set_ylim([-0.75,0.65])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)",
"_____no_output_____"
]
],
[
[
"**Observations**:\n- The value of $\\lambda$ is a hyperparameter that you can tune using a dev set.\n- L2 regularization makes your decision boundary smoother. If $\\lambda$ is too large, it is also possible to \"oversmooth\", resulting in a model with high bias.\n\n**What is L2-regularization actually doing?**:\n\nL2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes. \n\n<font color='blue'>\n**What you should remember** -- the implications of L2-regularization on:\n- The cost computation:\n - A regularization term is added to the cost\n- The backpropagation function:\n - There are extra terms in the gradients with respect to weight matrices\n- Weights end up smaller (\"weight decay\"): \n - Weights are pushed to smaller values.",
"_____no_output_____"
],
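[
"As an optional aside (not part of the graded assignment), the \"weight decay\" effect can be read directly off the gradient descent update once the extra term from formula (2) is included:\n\n$$W^{[l]} := W^{[l]} - \\alpha \\left( dW^{[l]}_{\\text{cross-entropy}} + \\frac{\\lambda}{m} W^{[l]} \\right) = \\left(1 - \\frac{\\alpha \\lambda}{m}\\right) W^{[l]} - \\alpha \\, dW^{[l]}_{\\text{cross-entropy}}$$\n\nso every update first shrinks the weights by the factor $\\left(1 - \\frac{\\alpha \\lambda}{m}\\right)$ before applying the usual gradient step.",
"_____no_output_____"
],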
[
"## 3 - Dropout\n\nFinally, **dropout** is a widely used regularization technique that is specific to deep learning. \n**It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!\n\n<!--\nTo understand drop-out, consider this conversation with a friend:\n- Friend: \"Why do you need all these neurons to train your network and classify images?\". \n- You: \"Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more featurse my model learns!\"\n- Friend: \"I see, but are you sure that your neurons are learning different features and not all the same features?\"\n- You: \"Good point... Neurons in the same layer actually don't talk to each other. It should be definitly possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution.\"\n!--> \n\n\n<center>\n<video width=\"620\" height=\"440\" src=\"images/dropout1_kiank.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\n<br>\n<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\\_prob$ or keep it with probability $keep\\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption>\n\n<center>\n<video width=\"620\" height=\"440\" src=\"images/dropout2_kiank.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\n\n<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>\n\n\nWhen you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time. \n\n### 3.1 - Forward propagation with dropout\n\n**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer. \n\n**Instructions**:\nYou would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:\n1. In lecture, we dicussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.\n2. Set each entry of $D^{[1]}$ to be 0 with probability (`1-keep_prob`) or 1 with probability (`keep_prob`), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 0 (if entry is less than 0.5) or 1 (if entry is more than 0.5) you would do: `X = (X > 0.5)`. Note that 0 and 1 are respectively equivalent to False and True.\n3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.\n4. Divide $A^{[1]}$ by `keep_prob`. 
By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: forward_propagation_with_dropout\n\ndef forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):\n \"\"\"\n Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.\n \n Arguments:\n X -- input dataset, of shape (2, number of examples)\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n W1 -- weight matrix of shape (20, 2)\n b1 -- bias vector of shape (20, 1)\n W2 -- weight matrix of shape (3, 20)\n b2 -- bias vector of shape (3, 1)\n W3 -- weight matrix of shape (1, 3)\n b3 -- bias vector of shape (1, 1)\n keep_prob - probability of keeping a neuron active during drop-out, scalar\n \n Returns:\n A3 -- last activation value, output of the forward propagation, of shape (1,1)\n cache -- tuple, information stored for computing the backward propagation\n \"\"\"\n \n np.random.seed(1)\n \n # retrieve parameters\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n W3 = parameters[\"W3\"]\n b3 = parameters[\"b3\"]\n \n # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID\n Z1 = np.dot(W1, X) + b1\n A1 = relu(Z1)\n ### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above. \n D1 = None # Step 1: initialize matrix D1 = np.random.rand(..., ...)\n D1 = None # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)\n A1 = None # Step 3: shut down some neurons of A1\n A1 = None # Step 4: scale the value of neurons that haven't been shut down\n ### END CODE HERE ###\n Z2 = np.dot(W2, A1) + b2\n A2 = relu(Z2)\n ### START CODE HERE ### (approx. 4 lines)\n D2 = None # Step 1: initialize matrix D2 = np.random.rand(..., ...)\n D2 = None # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)\n A2 = None # Step 3: shut down some neurons of A2\n A2 = None # Step 4: scale the value of neurons that haven't been shut down\n ### END CODE HERE ###\n Z3 = np.dot(W3, A2) + b3\n A3 = sigmoid(Z3)\n \n cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)\n \n return A3, cache",
"_____no_output_____"
],
[
"X_assess, parameters = forward_propagation_with_dropout_test_case()\n\nA3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)\nprint (\"A3 = \" + str(A3))",
"_____no_output_____"
]
],
[
[
"**Expected Output**: \n\n<table> \n <tr>\n <td>\n **A3**\n </td>\n <td>\n [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]\n </td>\n \n </tr>\n\n</table> ",
"_____no_output_____"
],
[
"### 3.2 - Backward propagation with dropout\n\n**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache. \n\n**Instruction**:\nBackpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:\n1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`. \n2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).\n",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: backward_propagation_with_dropout\n\ndef backward_propagation_with_dropout(X, Y, cache, keep_prob):\n \"\"\"\n Implements the backward propagation of our baseline model to which we added dropout.\n \n Arguments:\n X -- input dataset, of shape (2, number of examples)\n Y -- \"true\" labels vector, of shape (output size, number of examples)\n cache -- cache output from forward_propagation_with_dropout()\n keep_prob - probability of keeping a neuron active during drop-out, scalar\n \n Returns:\n gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables\n \"\"\"\n \n m = X.shape[1]\n (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache\n \n dZ3 = A3 - Y\n dW3 = 1./m * np.dot(dZ3, A2.T)\n db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)\n dA2 = np.dot(W3.T, dZ3)\n ### START CODE HERE ### (≈ 2 lines of code)\n dA2 = None # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation\n dA2 = None # Step 2: Scale the value of neurons that haven't been shut down\n ### END CODE HERE ###\n dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n dW2 = 1./m * np.dot(dZ2, A1.T)\n db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)\n \n dA1 = np.dot(W2.T, dZ2)\n ### START CODE HERE ### (≈ 2 lines of code)\n dA1 = None # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation\n dA1 = None # Step 2: Scale the value of neurons that haven't been shut down\n ### END CODE HERE ###\n dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n dW1 = 1./m * np.dot(dZ1, X.T)\n db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)\n \n gradients = {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3,\"dA2\": dA2,\n \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2, \"dA1\": dA1, \n \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n \n return gradients",
"_____no_output_____"
],
[
"X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()\n\ngradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)\n\nprint (\"dA1 = \" + str(gradients[\"dA1\"]))\nprint (\"dA2 = \" + str(gradients[\"dA2\"]))",
"_____no_output_____"
]
],
[
[
"**Expected Output**: \n\n<table> \n <tr>\n <td>\n **dA1**\n </td>\n <td>\n [[ 0.36544439 0. -0.00188233 0. -0.17408748]\n [ 0.65515713 0. -0.00337459 0. -0. ]]\n </td>\n \n </tr>\n <tr>\n <td>\n **dA2**\n </td>\n <td>\n [[ 0.58180856 0. -0.00299679 0. -0.27715731]\n [ 0. 0.53159854 -0. 0.53159854 -0.34089673]\n [ 0. 0. -0.00292733 0. -0. ]]\n </td>\n \n </tr>\n</table> ",
"_____no_output_____"
],
[
"Let's now run the model with dropout (`keep_prob = 0.86`). It means at every iteration you shut down each neurons of layer 1 and 2 with 14% probability. The function `model()` will now call:\n- `forward_propagation_with_dropout` instead of `forward_propagation`.\n- `backward_propagation_with_dropout` instead of `backward_propagation`.",
"_____no_output_____"
]
],
[
[
"parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)\n\nprint (\"On the train set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)",
"_____no_output_____"
]
],
[
[
"Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you! \n\nRun the code below to plot the decision boundary.",
"_____no_output_____"
]
],
[
[
"plt.title(\"Model with dropout\")\naxes = plt.gca()\naxes.set_xlim([-0.75,0.40])\naxes.set_ylim([-0.75,0.65])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)",
"_____no_output_____"
]
],
[
[
"**Note**:\n- A **common mistake** when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training. \n- Deep learning frameworks like [tensorflow](https://www.tensorflow.org/api_docs/python/tf/nn/dropout), [PaddlePaddle](http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), [keras](https://keras.io/layers/core/#dropout) or [caffe](http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks.\n\n<font color='blue'>\n**What you should remember about dropout:**\n- Dropout is a regularization technique.\n- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.\n- Apply dropout both during forward and backward propagation.\n- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5. ",
"_____no_output_____"
],
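[
"A tiny optional sanity check (not part of the assignment) of the claim that dividing by keep_prob preserves the expected value of the activations; the numbers below are arbitrary:\n\n```python\nimport numpy as np\n\nnp.random.seed(0)\nkeep_prob = 0.8\na = np.ones((1000, 1000))                    # pretend activations, all equal to 1\n\nd = (np.random.rand(*a.shape) < keep_prob)   # dropout mask\na_dropped = (a * d) / keep_prob              # inverted dropout scaling\n\nprint(a.mean(), a_dropped.mean())            # both are close to 1.0\n```",
"_____no_output_____"
],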
[
"## 4 - Conclusions",
"_____no_output_____"
],
[
"**Here are the results of our three models**: \n\n<table> \n <tr>\n <td>\n **model**\n </td>\n <td>\n **train accuracy**\n </td>\n <td>\n **test accuracy**\n </td>\n\n </tr>\n <td>\n 3-layer NN without regularization\n </td>\n <td>\n 95%\n </td>\n <td>\n 91.5%\n </td>\n <tr>\n <td>\n 3-layer NN with L2-regularization\n </td>\n <td>\n 94%\n </td>\n <td>\n 93%\n </td>\n </tr>\n <tr>\n <td>\n 3-layer NN with dropout\n </td>\n <td>\n 93%\n </td>\n <td>\n 95%\n </td>\n </tr>\n</table> ",
"_____no_output_____"
],
[
"Note that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system. ",
"_____no_output_____"
],
[
"Congratulations for finishing this assignment! And also for revolutionizing French football. :-) ",
"_____no_output_____"
],
[
"<font color='blue'>\n**What we want you to remember from this notebook**:\n- Regularization will help you reduce overfitting.\n- Regularization will drive your weights to lower values.\n- L2 regularization and Dropout are two very effective regularization techniques.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
cb5a6f3bb3910bce54469538e33445f3d6ba7d8b | 5,989 | ipynb | Jupyter Notebook | experiment/test.ipynb | cattidea/anime-avatar-generator | 48c1c48c41bf686943f750bf2cc102b2f13c4a5d | [
"MIT"
] | 1 | 2020-09-02T11:55:39.000Z | 2020-09-02T11:55:39.000Z | experiment/test.ipynb | cattidea/anime-avatar-generator | 48c1c48c41bf686943f750bf2cc102b2f13c4a5d | [
"MIT"
] | 1 | 2021-11-18T08:44:56.000Z | 2021-11-18T08:44:56.000Z | experiment/test.ipynb | cattidea/anime-avatar-generator | 48c1c48c41bf686943f750bf2cc102b2f13c4a5d | [
"MIT"
] | 1 | 2021-08-01T15:30:40.000Z | 2021-08-01T15:30:40.000Z | 26.736607 | 159 | 0.546001 | [
[
[
"%reload_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"import sys\nimport os\nBASE_DIR = os.path.abspath(os.path.join(os.path.dirname(\"__file__\"), os.path.pardir))\nsys.path.append(BASE_DIR)",
"_____no_output_____"
],
[
"import cv2\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport imgaug as ia\nimport imgaug.augmenters as iaa\nimport tensorflow as tf",
"_____no_output_____"
],
[
"from data_processor.data_loader import DataLoader, show_batch, DataLoaderWithoutCache\nfrom models.dcgan import DCGAN, gen_random",
"_____no_output_____"
],
[
"os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"1\"\ngpus = tf.config.experimental.list_physical_devices(device_type='GPU')\ncpus = tf.config.experimental.list_physical_devices(device_type='CPU') \ntf.config.experimental.set_virtual_device_configuration(\n gpus[0],\n [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024*7.5)])",
"_____no_output_____"
],
[
"batch_size = 256\ncache_size = 1024 * 64\nnz = 100\nglr = 2e-4\ndlr = 2e-4\nimg_dir = 'data/faces/'\nIMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS = 64, 64, 3\n\ndef scale(img):\n return (img - 127.5) / 127.5\n\ndef rescale(img):\n return img * 127.5 + 127.5",
"_____no_output_____"
],
[
"sometimes = lambda aug: iaa.Sometimes(0.5, aug)\n\naug = iaa.Sequential(\n [\n iaa.Fliplr(0.5), # horizontally flip 50% of all images\n sometimes(iaa.CropAndPad(\n percent=(-0.05, 0.1),\n pad_mode=ia.ALL,\n pad_cval=(0, 255)\n )),\n sometimes(iaa.Affine(\n scale={\"x\": (0.9, 1.1), \"y\": (0.9, 1.1)}, # scale images to 80-120% of their size, individually per axis\n translate_percent={\"x\": (-0.1, 0.1), \"y\": (-0.1, 0.1)}, # translate by -20 to +20 percent (per axis)\n rotate=(-10, 10), # rotate by -45 to +45 degrees\n order=[0, 1], # use nearest neighbour or bilinear interpolation (fast)\n cval=(0, 255), # if mode is constant, use a cval between 0 and 255\n mode=ia.ALL # use any of scikit-image's warping modes (see 2nd image from the top for examples)\n )),\n ],\n random_order=True\n)",
"_____no_output_____"
],
[
"data_loader = DataLoaderWithoutCache(data_dir=os.path.join(BASE_DIR, img_dir), img_shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), cache_size=cache_size)\ndata_loader.scale(scale)\\\n .batch(batch_size)\\\n .augment(lambda x: aug(images=x))\nimg_batch = rescale(next(iter(data_loader)))\nshow_batch(img_batch)",
"_____no_output_____"
],
[
"num_examples_to_generate = 36\n\nseed = gen_random((num_examples_to_generate, nz))",
"_____no_output_____"
],
[
"def show_generator(generator, seed):\n predictions = generator(seed, training=False).numpy()\n images = rescale(predictions).astype(np.uint8)\n show_batch(images)",
"_____no_output_____"
],
[
"dcgan = DCGAN(image_shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), dlr=dlr, glr=glr, nz=nz)\ndcgan.summary()",
"_____no_output_____"
],
[
"show_generator(dcgan.generator, seed)",
"_____no_output_____"
],
[
"for epoch in range(500):\n for batch_idx, img_batch in enumerate(data_loader):\n dcgan.train_step(img_batch, num_iter_disc=1, num_iter_gen=1)\n print(f'epoch: {epoch}, batch: {batch_idx} ', end='\\r')\n show_generator(dcgan.generator, seed)",
"_____no_output_____"
],
[
"img_batch = rescale(next(iter(data_loader)))\nshow_batch(img_batch)",
"_____no_output_____"
],
[
"show_generator(dcgan.generator, seed)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5a71139c0313f541036be714b0512b04fd5c9e | 69,755 | ipynb | Jupyter Notebook | Codelabs/1.Hello_ML_World.ipynb | MidasXIV/coursera-Tensorflow | dc38899f5481c4b928fab1e7908de6c811fc481c | [
"MIT"
] | null | null | null | Codelabs/1.Hello_ML_World.ipynb | MidasXIV/coursera-Tensorflow | dc38899f5481c4b928fab1e7908de6c811fc481c | [
"MIT"
] | 5 | 2019-05-15T05:16:26.000Z | 2019-05-16T21:50:39.000Z | Codelabs/1.Hello_ML_World.ipynb | MidasXIV/coursera-Tensorflow | dc38899f5481c4b928fab1e7908de6c811fc481c | [
"MIT"
] | null | null | null | 54.284047 | 435 | 0.347918 | [
[
[
"<a href=\"https://colab.research.google.com/github/MidasXIV/Artificial-Intelliegence--Deep-Learning--Tensor-Flow/blob/master/Codelabs/1.Hello_ML_World.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# The Hello World of Deep Learning with Neural Networks",
"_____no_output_____"
],
[
"Like every first app you should start with something super simple that shows the overall scaffolding for how your code works. \n\nIn the case of creating neural networks, the sample I like to use is one where it learns the relationship between two numbers. So, for example, if you were writing code for a function like this, you already know the 'rules' -- \n\n\n```\nfloat my_function(float x){\n float y = (3 * x) + 1;\n return y;\n}\n```\n\nSo how would you train a neural network to do the equivalent task? Using data! By feeding it with a set of Xs, and a set of Ys, it should be able to figure out the relationship between them. \n\nThis is obviously a very different paradigm than what you might be used to, so let's step through it piece by piece.\n",
"_____no_output_____"
],
[
"## Imports\n\nLet's start with our imports. Here we are importing TensorFlow and calling it tf for ease of use.\n\nWe then import a library called numpy, which helps us to represent our data as lists easily and quickly.\n\nThe framework for defining a neural network as a set of Sequential layers is called keras, so we import that too.",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport numpy as np\nfrom tensorflow import keras",
"_____no_output_____"
]
],
[
[
"## Define and Compile the Neural Network\n\nNext we will create the simplest possible neural network. It has 1 layer, and that layer has 1 neuron, and the input shape to it is just 1 value.",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])",
"_____no_output_____"
]
],
[
[
"Now we compile our Neural Network. When we do so, we have to specify 2 functions, a loss and an optimizer.\n\nIf you've seen lots of math for machine learning, here's where it's usually used, but in this case it's nicely encapsulated in functions for you. But what happens here -- let's explain...\n\nWe know that in our function, the relationship between the numbers is y=3x+1. \n\nWhen the computer is trying to 'learn' that, it makes a guess...maybe y=10x+10. The LOSS function measures the guessed answers against the known correct answers and measures how well or how badly it did.\n\nIt then uses the OPTIMIZER function to make another guess. Based on how the loss function went, it will try to minimize the loss. At that point maybe it will come up with somehting like y=5x+5, which, while still pretty bad, is closer to the correct result (i.e. the loss is lower)\n\nIt will repeat this for the number of EPOCHS which you will see shortly. But first, here's how we tell it to use 'MEAN SQUARED ERROR' for the loss and 'STOCHASTIC GRADIENT DESCENT' for the optimizer. You don't need to understand the math for these yet, but you can see that they work! :)\n\nOver time you will learn the different and appropriate loss and optimizer functions for different scenarios. \n",
"_____no_output_____"
]
],
[
[
"model.compile(optimizer='sgd', loss='mean_squared_error')",
"_____no_output_____"
]
],
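 [
  [
   "# (Added illustration -- not part of the original lesson.) A rough, hand-written sketch of\n# what the loss and optimizer do: start from a bad guess for w and b, measure the mean\n# squared error, and nudge the guess downhill, over and over. All names here are made up\n# for this example; the real work is done by model.compile and model.fit above and below.\nimport numpy as np\nw, b = 10.0, 10.0                      # an initial (bad) guess, like y = 10x + 10\nxs_demo = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])\nys_demo = 3 * xs_demo + 1              # the 'true' relationship\nfor _ in range(1000):\n    preds = w * xs_demo + b\n    grad_w = np.mean(2 * (preds - ys_demo) * xs_demo)   # d(MSE)/dw\n    grad_b = np.mean(2 * (preds - ys_demo))             # d(MSE)/db\n    w -= 0.01 * grad_w                 # gradient-descent style update\n    b -= 0.01 * grad_b\nprint(w, b)                            # should end up close to 3 and 1",
   "_____no_output_____"
  ]
 ],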
[
[
"## Providing the Data\n\nNext up we'll feed in some data. In this case we are taking 6 xs and 6ys. You can see that the relationship between these is that y=2x-1, so where x = -1, y=-3 etc. etc. \n\nA python library called 'Numpy' provides lots of array type data structures that are a defacto standard way of doing it. We declare that we want to use these by specifying the values asn an np.array[]",
"_____no_output_____"
]
],
[
[
"xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)\nys = np.array([-2.0, 1.0, 4.0, 7.0, 10.0, 13.0], dtype=float)",
"_____no_output_____"
]
],
[
[
"# Training the Neural Network",
"_____no_output_____"
],
[
"The process of training the neural network, where it 'learns' the relationship between the Xs and Ys is in the **model.fit** call. This is where it will go through the loop we spoke about above, making a guess, measuring how good or bad it is (aka the loss), using the opimizer to make another guess etc. It will do it for the number of epochs you specify. When you run this code, you'll see the loss on the right hand side.",
"_____no_output_____"
]
],
[
[
"model.fit(xs, ys, epochs=500)",
"Epoch 1/500\n6/6 [==============================] - 0s 16ms/step - loss: 57.4875\nEpoch 2/500\n6/6 [==============================] - 0s 173us/step - loss: 45.2285\nEpoch 3/500\n6/6 [==============================] - 0s 171us/step - loss: 35.5836\nEpoch 4/500\n6/6 [==============================] - 0s 195us/step - loss: 27.9955\nEpoch 5/500\n6/6 [==============================] - 0s 157us/step - loss: 22.0256\nEpoch 6/500\n6/6 [==============================] - 0s 132us/step - loss: 17.3287\nEpoch 7/500\n6/6 [==============================] - 0s 134us/step - loss: 13.6334\nEpoch 8/500\n6/6 [==============================] - 0s 129us/step - loss: 10.7262\nEpoch 9/500\n6/6 [==============================] - 0s 154us/step - loss: 8.4389\nEpoch 10/500\n6/6 [==============================] - 0s 135us/step - loss: 6.6393\nEpoch 11/500\n6/6 [==============================] - 0s 115us/step - loss: 5.2235\nEpoch 12/500\n6/6 [==============================] - 0s 129us/step - loss: 4.1097\nEpoch 13/500\n6/6 [==============================] - 0s 152us/step - loss: 3.2333\nEpoch 14/500\n6/6 [==============================] - 0s 113us/step - loss: 2.5439\nEpoch 15/500\n6/6 [==============================] - 0s 124us/step - loss: 2.0014\nEpoch 16/500\n6/6 [==============================] - 0s 135us/step - loss: 1.5747\nEpoch 17/500\n6/6 [==============================] - 0s 137us/step - loss: 1.2389\nEpoch 18/500\n6/6 [==============================] - 0s 149us/step - loss: 0.9747\nEpoch 19/500\n6/6 [==============================] - 0s 142us/step - loss: 0.7669\nEpoch 20/500\n6/6 [==============================] - 0s 139us/step - loss: 0.6034\nEpoch 21/500\n6/6 [==============================] - 0s 139us/step - loss: 0.4748\nEpoch 22/500\n6/6 [==============================] - 0s 126us/step - loss: 0.3736\nEpoch 23/500\n6/6 [==============================] - 0s 118us/step - loss: 0.2939\nEpoch 24/500\n6/6 [==============================] - 0s 111us/step - loss: 0.2313\nEpoch 25/500\n6/6 [==============================] - 0s 142us/step - loss: 0.1820\nEpoch 26/500\n6/6 [==============================] - 0s 173us/step - loss: 0.1432\nEpoch 27/500\n6/6 [==============================] - 0s 154us/step - loss: 0.1127\nEpoch 28/500\n6/6 [==============================] - 0s 164us/step - loss: 0.0887\nEpoch 29/500\n6/6 [==============================] - 0s 149us/step - loss: 0.0698\nEpoch 30/500\n6/6 [==============================] - 0s 126us/step - loss: 0.0549\nEpoch 31/500\n6/6 [==============================] - 0s 114us/step - loss: 0.0433\nEpoch 32/500\n6/6 [==============================] - 0s 125us/step - loss: 0.0341\nEpoch 33/500\n6/6 [==============================] - 0s 131us/step - loss: 0.0268\nEpoch 34/500\n6/6 [==============================] - 0s 117us/step - loss: 0.0211\nEpoch 35/500\n6/6 [==============================] - 0s 121us/step - loss: 0.0166\nEpoch 36/500\n6/6 [==============================] - 0s 118us/step - loss: 0.0131\nEpoch 37/500\n6/6 [==============================] - 0s 118us/step - loss: 0.0103\nEpoch 38/500\n6/6 [==============================] - 0s 119us/step - loss: 0.0082\nEpoch 39/500\n6/6 [==============================] - 0s 117us/step - loss: 0.0064\nEpoch 40/500\n6/6 [==============================] - 0s 129us/step - loss: 0.0051\nEpoch 41/500\n6/6 [==============================] - 0s 109us/step - loss: 0.0040\nEpoch 42/500\n6/6 [==============================] - 0s 116us/step - loss: 0.0032\nEpoch 43/500\n6/6 [==============================] - 0s 107us/step - 
loss: 0.0025\nEpoch 44/500\n6/6 [==============================] - 0s 112us/step - loss: 0.0020\nEpoch 45/500\n6/6 [==============================] - 0s 122us/step - loss: 0.0016\nEpoch 46/500\n6/6 [==============================] - 0s 143us/step - loss: 0.0013\nEpoch 47/500\n6/6 [==============================] - 0s 136us/step - loss: 0.0010\nEpoch 48/500\n6/6 [==============================] - 0s 116us/step - loss: 8.2458e-04\nEpoch 49/500\n6/6 [==============================] - 0s 134us/step - loss: 6.6677e-04\nEpoch 50/500\n6/6 [==============================] - 0s 114us/step - loss: 5.4223e-04\nEpoch 51/500\n6/6 [==============================] - 0s 152us/step - loss: 4.4389e-04\nEpoch 52/500\n6/6 [==============================] - 0s 131us/step - loss: 3.6617e-04\nEpoch 53/500\n6/6 [==============================] - 0s 162us/step - loss: 3.0467e-04\nEpoch 54/500\n6/6 [==============================] - 0s 135us/step - loss: 2.5594e-04\nEpoch 55/500\n6/6 [==============================] - 0s 124us/step - loss: 2.1727e-04\nEpoch 56/500\n6/6 [==============================] - 0s 116us/step - loss: 1.8652e-04\nEpoch 57/500\n6/6 [==============================] - 0s 112us/step - loss: 1.6201e-04\nEpoch 58/500\n6/6 [==============================] - 0s 129us/step - loss: 1.4241e-04\nEpoch 59/500\n6/6 [==============================] - 0s 127us/step - loss: 1.2668e-04\nEpoch 60/500\n6/6 [==============================] - 0s 120us/step - loss: 1.1401e-04\nEpoch 61/500\n6/6 [==============================] - 0s 132us/step - loss: 1.0375e-04\nEpoch 62/500\n6/6 [==============================] - 0s 176us/step - loss: 9.5379e-05\nEpoch 63/500\n6/6 [==============================] - 0s 124us/step - loss: 8.8515e-05\nEpoch 64/500\n6/6 [==============================] - 0s 110us/step - loss: 8.2838e-05\nEpoch 65/500\n6/6 [==============================] - 0s 110us/step - loss: 7.8103e-05\nEpoch 66/500\n6/6 [==============================] - 0s 131us/step - loss: 7.4109e-05\nEpoch 67/500\n6/6 [==============================] - 0s 140us/step - loss: 7.0707e-05\nEpoch 68/500\n6/6 [==============================] - 0s 133us/step - loss: 6.7776e-05\nEpoch 69/500\n6/6 [==============================] - 0s 143us/step - loss: 6.5221e-05\nEpoch 70/500\n6/6 [==============================] - 0s 138us/step - loss: 6.2964e-05\nEpoch 71/500\n6/6 [==============================] - 0s 130us/step - loss: 6.0952e-05\nEpoch 72/500\n6/6 [==============================] - 0s 152us/step - loss: 5.9133e-05\nEpoch 73/500\n6/6 [==============================] - 0s 126us/step - loss: 5.7474e-05\nEpoch 74/500\n6/6 [==============================] - 0s 141us/step - loss: 5.5941e-05\nEpoch 75/500\n6/6 [==============================] - 0s 120us/step - loss: 5.4516e-05\nEpoch 76/500\n6/6 [==============================] - 0s 101us/step - loss: 5.3179e-05\nEpoch 77/500\n6/6 [==============================] - 0s 123us/step - loss: 5.1917e-05\nEpoch 78/500\n6/6 [==============================] - 0s 157us/step - loss: 5.0715e-05\nEpoch 79/500\n6/6 [==============================] - 0s 152us/step - loss: 4.9569e-05\nEpoch 80/500\n6/6 [==============================] - 0s 148us/step - loss: 4.8467e-05\nEpoch 81/500\n6/6 [==============================] - 0s 147us/step - loss: 4.7406e-05\nEpoch 82/500\n6/6 [==============================] - 0s 122us/step - loss: 4.6381e-05\nEpoch 83/500\n6/6 [==============================] - 0s 144us/step - loss: 4.5388e-05\nEpoch 84/500\n6/6 [==============================] - 0s 155us/step - loss: 
4.4423e-05\nEpoch 85/500\n6/6 [==============================] - 0s 124us/step - loss: 4.3485e-05\nEpoch 86/500\n6/6 [==============================] - 0s 133us/step - loss: 4.2572e-05\nEpoch 87/500\n6/6 [==============================] - 0s 122us/step - loss: 4.1682e-05\nEpoch 88/500\n6/6 [==============================] - 0s 107us/step - loss: 4.0814e-05\nEpoch 89/500\n6/6 [==============================] - 0s 125us/step - loss: 3.9965e-05\nEpoch 90/500\n6/6 [==============================] - 0s 118us/step - loss: 3.9137e-05\nEpoch 91/500\n6/6 [==============================] - 0s 112us/step - loss: 3.8327e-05\nEpoch 92/500\n6/6 [==============================] - 0s 129us/step - loss: 3.7535e-05\nEpoch 93/500\n6/6 [==============================] - 0s 128us/step - loss: 3.6761e-05\nEpoch 94/500\n6/6 [==============================] - 0s 130us/step - loss: 3.6002e-05\nEpoch 95/500\n6/6 [==============================] - 0s 141us/step - loss: 3.5261e-05\nEpoch 96/500\n6/6 [==============================] - 0s 126us/step - loss: 3.4534e-05\nEpoch 97/500\n6/6 [==============================] - 0s 151us/step - loss: 3.3825e-05\nEpoch 98/500\n6/6 [==============================] - 0s 161us/step - loss: 3.3128e-05\nEpoch 99/500\n6/6 [==============================] - 0s 126us/step - loss: 3.2447e-05\nEpoch 100/500\n6/6 [==============================] - 0s 175us/step - loss: 3.1780e-05\nEpoch 101/500\n6/6 [==============================] - 0s 166us/step - loss: 3.1126e-05\nEpoch 102/500\n6/6 [==============================] - 0s 148us/step - loss: 3.0486e-05\nEpoch 103/500\n6/6 [==============================] - 0s 121us/step - loss: 2.9860e-05\nEpoch 104/500\n6/6 [==============================] - 0s 127us/step - loss: 2.9246e-05\nEpoch 105/500\n6/6 [==============================] - 0s 135us/step - loss: 2.8645e-05\nEpoch 106/500\n6/6 [==============================] - 0s 150us/step - loss: 2.8056e-05\nEpoch 107/500\n6/6 [==============================] - 0s 158us/step - loss: 2.7481e-05\nEpoch 108/500\n6/6 [==============================] - 0s 133us/step - loss: 2.6916e-05\nEpoch 109/500\n6/6 [==============================] - 0s 165us/step - loss: 2.6363e-05\nEpoch 110/500\n6/6 [==============================] - 0s 149us/step - loss: 2.5821e-05\nEpoch 111/500\n6/6 [==============================] - 0s 144us/step - loss: 2.5290e-05\nEpoch 112/500\n6/6 [==============================] - 0s 128us/step - loss: 2.4771e-05\nEpoch 113/500\n6/6 [==============================] - 0s 116us/step - loss: 2.4263e-05\nEpoch 114/500\n6/6 [==============================] - 0s 124us/step - loss: 2.3763e-05\nEpoch 115/500\n6/6 [==============================] - 0s 117us/step - loss: 2.3276e-05\nEpoch 116/500\n6/6 [==============================] - 0s 119us/step - loss: 2.2798e-05\nEpoch 117/500\n6/6 [==============================] - 0s 185us/step - loss: 2.2328e-05\nEpoch 118/500\n6/6 [==============================] - 0s 217us/step - loss: 2.1871e-05\nEpoch 119/500\n6/6 [==============================] - 0s 169us/step - loss: 2.1421e-05\nEpoch 120/500\n6/6 [==============================] - 0s 133us/step - loss: 2.0981e-05\nEpoch 121/500\n6/6 [==============================] - 0s 123us/step - loss: 2.0551e-05\nEpoch 122/500\n6/6 [==============================] - 0s 116us/step - loss: 2.0128e-05\nEpoch 123/500\n6/6 [==============================] - 0s 127us/step - loss: 1.9715e-05\nEpoch 124/500\n6/6 [==============================] - 0s 129us/step - loss: 1.9310e-05\nEpoch 125/500\n6/6 
[==============================] - 0s 158us/step - loss: 1.8914e-05\nEpoch 126/500\n6/6 [==============================] - 0s 172us/step - loss: 1.8525e-05\nEpoch 127/500\n6/6 [==============================] - 0s 182us/step - loss: 1.8145e-05\nEpoch 128/500\n6/6 [==============================] - 0s 139us/step - loss: 1.7772e-05\nEpoch 129/500\n6/6 [==============================] - 0s 114us/step - loss: 1.7407e-05\nEpoch 130/500\n6/6 [==============================] - 0s 108us/step - loss: 1.7049e-05\nEpoch 131/500\n6/6 [==============================] - 0s 136us/step - loss: 1.6699e-05\nEpoch 132/500\n6/6 [==============================] - 0s 133us/step - loss: 1.6355e-05\nEpoch 133/500\n6/6 [==============================] - 0s 115us/step - loss: 1.6020e-05\nEpoch 134/500\n6/6 [==============================] - 0s 117us/step - loss: 1.5691e-05\nEpoch 135/500\n6/6 [==============================] - 0s 161us/step - loss: 1.5369e-05\nEpoch 136/500\n6/6 [==============================] - 0s 155us/step - loss: 1.5052e-05\nEpoch 137/500\n6/6 [==============================] - 0s 108us/step - loss: 1.4743e-05\nEpoch 138/500\n6/6 [==============================] - 0s 131us/step - loss: 1.4441e-05\nEpoch 139/500\n6/6 [==============================] - 0s 109us/step - loss: 1.4144e-05\nEpoch 140/500\n6/6 [==============================] - 0s 128us/step - loss: 1.3854e-05\nEpoch 141/500\n6/6 [==============================] - 0s 111us/step - loss: 1.3569e-05\nEpoch 142/500\n6/6 [==============================] - 0s 153us/step - loss: 1.3290e-05\nEpoch 143/500\n6/6 [==============================] - 0s 155us/step - loss: 1.3017e-05\nEpoch 144/500\n6/6 [==============================] - 0s 129us/step - loss: 1.2750e-05\nEpoch 145/500\n6/6 [==============================] - 0s 105us/step - loss: 1.2488e-05\nEpoch 146/500\n6/6 [==============================] - 0s 107us/step - loss: 1.2232e-05\nEpoch 147/500\n6/6 [==============================] - 0s 111us/step - loss: 1.1980e-05\nEpoch 148/500\n6/6 [==============================] - 0s 118us/step - loss: 1.1735e-05\nEpoch 149/500\n6/6 [==============================] - 0s 152us/step - loss: 1.1493e-05\nEpoch 150/500\n6/6 [==============================] - 0s 120us/step - loss: 1.1258e-05\nEpoch 151/500\n6/6 [==============================] - 0s 127us/step - loss: 1.1026e-05\nEpoch 152/500\n6/6 [==============================] - 0s 117us/step - loss: 1.0800e-05\nEpoch 153/500\n6/6 [==============================] - 0s 100us/step - loss: 1.0578e-05\nEpoch 154/500\n6/6 [==============================] - 0s 93us/step - loss: 1.0361e-05\nEpoch 155/500\n6/6 [==============================] - 0s 105us/step - loss: 1.0148e-05\nEpoch 156/500\n6/6 [==============================] - 0s 142us/step - loss: 9.9390e-06\nEpoch 157/500\n6/6 [==============================] - 0s 124us/step - loss: 9.7356e-06\nEpoch 158/500\n6/6 [==============================] - 0s 143us/step - loss: 9.5355e-06\nEpoch 159/500\n6/6 [==============================] - 0s 131us/step - loss: 9.3398e-06\nEpoch 160/500\n6/6 [==============================] - 0s 124us/step - loss: 9.1474e-06\nEpoch 161/500\n6/6 [==============================] - 0s 137us/step - loss: 8.9598e-06\nEpoch 162/500\n6/6 [==============================] - 0s 139us/step - loss: 8.7757e-06\nEpoch 163/500\n6/6 [==============================] - 0s 146us/step - loss: 8.5954e-06\nEpoch 164/500\n6/6 [==============================] - 0s 164us/step - loss: 8.4188e-06\nEpoch 165/500\n6/6 [==============================] - 
0s 116us/step - loss: 8.2452e-06\nEpoch 166/500\n6/6 [==============================] - 0s 135us/step - loss: 8.0767e-06\nEpoch 167/500\n6/6 [==============================] - 0s 149us/step - loss: 7.9103e-06\nEpoch 168/500\n6/6 [==============================] - 0s 220us/step - loss: 7.7484e-06\nEpoch 169/500\n6/6 [==============================] - 0s 143us/step - loss: 7.5892e-06\nEpoch 170/500\n6/6 [==============================] - 0s 155us/step - loss: 7.4332e-06\nEpoch 171/500\n6/6 [==============================] - 0s 156us/step - loss: 7.2802e-06\nEpoch 172/500\n6/6 [==============================] - 0s 152us/step - loss: 7.1305e-06\nEpoch 173/500\n6/6 [==============================] - 0s 149us/step - loss: 6.9842e-06\nEpoch 174/500\n6/6 [==============================] - 0s 143us/step - loss: 6.8411e-06\nEpoch 175/500\n6/6 [==============================] - 0s 116us/step - loss: 6.7008e-06\nEpoch 176/500\n6/6 [==============================] - 0s 129us/step - loss: 6.5629e-06\nEpoch 177/500\n6/6 [==============================] - 0s 159us/step - loss: 6.4276e-06\nEpoch 178/500\n6/6 [==============================] - 0s 163us/step - loss: 6.2958e-06\nEpoch 179/500\n6/6 [==============================] - 0s 130us/step - loss: 6.1663e-06\nEpoch 180/500\n6/6 [==============================] - 0s 165us/step - loss: 6.0392e-06\nEpoch 181/500\n6/6 [==============================] - 0s 142us/step - loss: 5.9155e-06\nEpoch 182/500\n6/6 [==============================] - 0s 175us/step - loss: 5.7942e-06\nEpoch 183/500\n6/6 [==============================] - 0s 167us/step - loss: 5.6750e-06\nEpoch 184/500\n6/6 [==============================] - 0s 137us/step - loss: 5.5581e-06\nEpoch 185/500\n6/6 [==============================] - 0s 173us/step - loss: 5.4441e-06\nEpoch 186/500\n6/6 [==============================] - 0s 164us/step - loss: 5.3321e-06\nEpoch 187/500\n6/6 [==============================] - 0s 115us/step - loss: 5.2225e-06\nEpoch 188/500\n6/6 [==============================] - 0s 119us/step - loss: 5.1156e-06\nEpoch 189/500\n6/6 [==============================] - 0s 109us/step - loss: 5.0105e-06\nEpoch 190/500\n6/6 [==============================] - 0s 109us/step - loss: 4.9073e-06\nEpoch 191/500\n6/6 [==============================] - 0s 108us/step - loss: 4.8067e-06\nEpoch 192/500\n6/6 [==============================] - 0s 159us/step - loss: 4.7081e-06\nEpoch 193/500\n6/6 [==============================] - 0s 120us/step - loss: 4.6112e-06\nEpoch 194/500\n6/6 [==============================] - 0s 117us/step - loss: 4.5165e-06\nEpoch 195/500\n6/6 [==============================] - 0s 136us/step - loss: 4.4238e-06\nEpoch 196/500\n6/6 [==============================] - 0s 140us/step - loss: 4.3326e-06\nEpoch 197/500\n6/6 [==============================] - 0s 162us/step - loss: 4.2434e-06\nEpoch 198/500\n6/6 [==============================] - 0s 180us/step - loss: 4.1561e-06\nEpoch 199/500\n6/6 [==============================] - 0s 159us/step - loss: 4.0706e-06\nEpoch 200/500\n6/6 [==============================] - 0s 130us/step - loss: 3.9869e-06\nEpoch 201/500\n6/6 [==============================] - 0s 112us/step - loss: 3.9053e-06\nEpoch 202/500\n6/6 [==============================] - 0s 127us/step - loss: 3.8251e-06\nEpoch 203/500\n6/6 [==============================] - 0s 139us/step - loss: 3.7462e-06\nEpoch 204/500\n6/6 [==============================] - 0s 158us/step - loss: 3.6697e-06\nEpoch 205/500\n6/6 [==============================] - 0s 148us/step - loss: 
3.5941e-06\nEpoch 206/500\n6/6 [==============================] - 0s 129us/step - loss: 3.5204e-06\nEpoch 207/500\n6/6 [==============================] - 0s 114us/step - loss: 3.4481e-06\nEpoch 208/500\n6/6 [==============================] - 0s 127us/step - loss: 3.3773e-06\nEpoch 209/500\n6/6 [==============================] - 0s 125us/step - loss: 3.3079e-06\nEpoch 210/500\n6/6 [==============================] - 0s 145us/step - loss: 3.2400e-06\nEpoch 211/500\n6/6 [==============================] - 0s 169us/step - loss: 3.1736e-06\nEpoch 212/500\n6/6 [==============================] - 0s 127us/step - loss: 3.1080e-06\nEpoch 213/500\n6/6 [==============================] - 0s 114us/step - loss: 3.0440e-06\nEpoch 214/500\n6/6 [==============================] - 0s 141us/step - loss: 2.9816e-06\nEpoch 215/500\n6/6 [==============================] - 0s 122us/step - loss: 2.9205e-06\nEpoch 216/500\n6/6 [==============================] - 0s 97us/step - loss: 2.8605e-06\nEpoch 217/500\n6/6 [==============================] - 0s 143us/step - loss: 2.8017e-06\nEpoch 218/500\n6/6 [==============================] - 0s 146us/step - loss: 2.7441e-06\nEpoch 219/500\n6/6 [==============================] - 0s 123us/step - loss: 2.6878e-06\nEpoch 220/500\n6/6 [==============================] - 0s 119us/step - loss: 2.6326e-06\nEpoch 221/500\n6/6 [==============================] - 0s 145us/step - loss: 2.5785e-06\nEpoch 222/500\n6/6 [==============================] - 0s 157us/step - loss: 2.5256e-06\nEpoch 223/500\n6/6 [==============================] - 0s 143us/step - loss: 2.4737e-06\nEpoch 224/500\n6/6 [==============================] - 0s 152us/step - loss: 2.4230e-06\nEpoch 225/500\n6/6 [==============================] - 0s 120us/step - loss: 2.3730e-06\nEpoch 226/500\n6/6 [==============================] - 0s 119us/step - loss: 2.3243e-06\nEpoch 227/500\n6/6 [==============================] - 0s 121us/step - loss: 2.2765e-06\nEpoch 228/500\n6/6 [==============================] - 0s 116us/step - loss: 2.2299e-06\nEpoch 229/500\n6/6 [==============================] - 0s 153us/step - loss: 2.1840e-06\nEpoch 230/500\n6/6 [==============================] - 0s 306us/step - loss: 2.1390e-06\nEpoch 231/500\n6/6 [==============================] - 0s 132us/step - loss: 2.0950e-06\nEpoch 232/500\n6/6 [==============================] - 0s 161us/step - loss: 2.0521e-06\nEpoch 233/500\n6/6 [==============================] - 0s 129us/step - loss: 2.0101e-06\nEpoch 234/500\n6/6 [==============================] - 0s 125us/step - loss: 1.9689e-06\nEpoch 235/500\n6/6 [==============================] - 0s 109us/step - loss: 1.9283e-06\nEpoch 236/500\n6/6 [==============================] - 0s 109us/step - loss: 1.8885e-06\nEpoch 237/500\n6/6 [==============================] - 0s 99us/step - loss: 1.8500e-06\nEpoch 238/500\n6/6 [==============================] - 0s 130us/step - loss: 1.8119e-06\nEpoch 239/500\n6/6 [==============================] - 0s 118us/step - loss: 1.7749e-06\nEpoch 240/500\n6/6 [==============================] - 0s 108us/step - loss: 1.7383e-06\nEpoch 241/500\n6/6 [==============================] - 0s 94us/step - loss: 1.7025e-06\nEpoch 242/500\n6/6 [==============================] - 0s 154us/step - loss: 1.6676e-06\nEpoch 243/500\n6/6 [==============================] - 0s 174us/step - loss: 1.6335e-06\nEpoch 244/500\n6/6 [==============================] - 0s 181us/step - loss: 1.5997e-06\nEpoch 245/500\n6/6 [==============================] - 0s 186us/step - loss: 1.5671e-06\nEpoch 246/500\n6/6 
[==============================] - 0s 165us/step - loss: 1.5348e-06\nEpoch 247/500\n6/6 [==============================] - 0s 156us/step - loss: 1.5033e-06\nEpoch 248/500\n6/6 [==============================] - 0s 136us/step - loss: 1.4726e-06\nEpoch 249/500\n6/6 [==============================] - 0s 151us/step - loss: 1.4421e-06\nEpoch 250/500\n6/6 [==============================] - 0s 160us/step - loss: 1.4127e-06\nEpoch 251/500\n6/6 [==============================] - 0s 134us/step - loss: 1.3835e-06\nEpoch 252/500\n6/6 [==============================] - 0s 130us/step - loss: 1.3552e-06\nEpoch 253/500\n6/6 [==============================] - 0s 109us/step - loss: 1.3274e-06\nEpoch 254/500\n6/6 [==============================] - 0s 104us/step - loss: 1.3001e-06\nEpoch 255/500\n6/6 [==============================] - 0s 149us/step - loss: 1.2733e-06\nEpoch 256/500\n6/6 [==============================] - 0s 134us/step - loss: 1.2473e-06\nEpoch 257/500\n6/6 [==============================] - 0s 171us/step - loss: 1.2216e-06\nEpoch 258/500\n6/6 [==============================] - 0s 144us/step - loss: 1.1964e-06\nEpoch 259/500\n6/6 [==============================] - 0s 130us/step - loss: 1.1717e-06\nEpoch 260/500\n6/6 [==============================] - 0s 139us/step - loss: 1.1477e-06\nEpoch 261/500\n6/6 [==============================] - 0s 128us/step - loss: 1.1242e-06\nEpoch 262/500\n6/6 [==============================] - 0s 122us/step - loss: 1.1010e-06\nEpoch 263/500\n6/6 [==============================] - 0s 101us/step - loss: 1.0784e-06\nEpoch 264/500\n6/6 [==============================] - 0s 118us/step - loss: 1.0562e-06\nEpoch 265/500\n6/6 [==============================] - 0s 121us/step - loss: 1.0346e-06\nEpoch 266/500\n6/6 [==============================] - 0s 105us/step - loss: 1.0134e-06\nEpoch 267/500\n6/6 [==============================] - 0s 100us/step - loss: 9.9246e-07\nEpoch 268/500\n6/6 [==============================] - 0s 110us/step - loss: 9.7208e-07\nEpoch 269/500\n6/6 [==============================] - 0s 122us/step - loss: 9.5236e-07\nEpoch 270/500\n6/6 [==============================] - 0s 134us/step - loss: 9.3280e-07\nEpoch 271/500\n6/6 [==============================] - 0s 148us/step - loss: 9.1338e-07\nEpoch 272/500\n6/6 [==============================] - 0s 117us/step - loss: 8.9463e-07\nEpoch 273/500\n6/6 [==============================] - 0s 97us/step - loss: 8.7640e-07\nEpoch 274/500\n6/6 [==============================] - 0s 101us/step - loss: 8.5831e-07\nEpoch 275/500\n6/6 [==============================] - 0s 113us/step - loss: 8.4068e-07\nEpoch 276/500\n6/6 [==============================] - 0s 103us/step - loss: 8.2352e-07\nEpoch 277/500\n6/6 [==============================] - 0s 101us/step - loss: 8.0661e-07\nEpoch 278/500\n6/6 [==============================] - 0s 130us/step - loss: 7.9010e-07\nEpoch 279/500\n6/6 [==============================] - 0s 119us/step - loss: 7.7378e-07\nEpoch 280/500\n6/6 [==============================] - 0s 112us/step - loss: 7.5804e-07\nEpoch 281/500\n6/6 [==============================] - 0s 108us/step - loss: 7.4235e-07\nEpoch 282/500\n6/6 [==============================] - 0s 111us/step - loss: 7.2709e-07\nEpoch 283/500\n6/6 [==============================] - 0s 96us/step - loss: 7.1226e-07\nEpoch 284/500\n6/6 [==============================] - 0s 95us/step - loss: 6.9748e-07\nEpoch 285/500\n6/6 [==============================] - 0s 106us/step - loss: 6.8317e-07\nEpoch 286/500\n6/6 [==============================] - 0s 
122us/step - loss: 6.6931e-07\nEpoch 287/500\n6/6 [==============================] - 0s 133us/step - loss: 6.5555e-07\nEpoch 288/500\n6/6 [==============================] - 0s 115us/step - loss: 6.4193e-07\nEpoch 289/500\n6/6 [==============================] - 0s 144us/step - loss: 6.2894e-07\nEpoch 290/500\n6/6 [==============================] - 0s 101us/step - loss: 6.1586e-07\nEpoch 291/500\n6/6 [==============================] - 0s 103us/step - loss: 6.0318e-07\nEpoch 292/500\n6/6 [==============================] - 0s 109us/step - loss: 5.9069e-07\nEpoch 293/500\n6/6 [==============================] - 0s 99us/step - loss: 5.7860e-07\nEpoch 294/500\n6/6 [==============================] - 0s 102us/step - loss: 5.6689e-07\nEpoch 295/500\n6/6 [==============================] - 0s 112us/step - loss: 5.5525e-07\nEpoch 296/500\n6/6 [==============================] - 0s 130us/step - loss: 5.4378e-07\nEpoch 297/500\n6/6 [==============================] - 0s 96us/step - loss: 5.3259e-07\nEpoch 298/500\n6/6 [==============================] - 0s 124us/step - loss: 5.2177e-07\nEpoch 299/500\n6/6 [==============================] - 0s 135us/step - loss: 5.1109e-07\nEpoch 300/500\n6/6 [==============================] - 0s 103us/step - loss: 5.0057e-07\nEpoch 301/500\n6/6 [==============================] - 0s 115us/step - loss: 4.9032e-07\nEpoch 302/500\n6/6 [==============================] - 0s 132us/step - loss: 4.8020e-07\nEpoch 303/500\n6/6 [==============================] - 0s 139us/step - loss: 4.7030e-07\nEpoch 304/500\n6/6 [==============================] - 0s 143us/step - loss: 4.6066e-07\nEpoch 305/500\n6/6 [==============================] - 0s 156us/step - loss: 4.5120e-07\nEpoch 306/500\n6/6 [==============================] - 0s 160us/step - loss: 4.4207e-07\nEpoch 307/500\n6/6 [==============================] - 0s 137us/step - loss: 4.3301e-07\nEpoch 308/500\n6/6 [==============================] - 0s 127us/step - loss: 4.2412e-07\nEpoch 309/500\n6/6 [==============================] - 0s 142us/step - loss: 4.1530e-07\nEpoch 310/500\n6/6 [==============================] - 0s 146us/step - loss: 4.0689e-07\nEpoch 311/500\n6/6 [==============================] - 0s 154us/step - loss: 3.9848e-07\nEpoch 312/500\n6/6 [==============================] - 0s 134us/step - loss: 3.9025e-07\nEpoch 313/500\n6/6 [==============================] - 0s 124us/step - loss: 3.8223e-07\nEpoch 314/500\n6/6 [==============================] - 0s 138us/step - loss: 3.7433e-07\nEpoch 315/500\n6/6 [==============================] - 0s 148us/step - loss: 3.6668e-07\nEpoch 316/500\n6/6 [==============================] - 0s 141us/step - loss: 3.5929e-07\nEpoch 317/500\n6/6 [==============================] - 0s 121us/step - loss: 3.5194e-07\nEpoch 318/500\n6/6 [==============================] - 0s 157us/step - loss: 3.4462e-07\nEpoch 319/500\n6/6 [==============================] - 0s 137us/step - loss: 3.3759e-07\nEpoch 320/500\n6/6 [==============================] - 0s 134us/step - loss: 3.3066e-07\nEpoch 321/500\n6/6 [==============================] - 0s 111us/step - loss: 3.2391e-07\nEpoch 322/500\n6/6 [==============================] - 0s 115us/step - loss: 3.1718e-07\nEpoch 323/500\n6/6 [==============================] - 0s 134us/step - loss: 3.1069e-07\nEpoch 324/500\n6/6 [==============================] - 0s 143us/step - loss: 3.0432e-07\nEpoch 325/500\n6/6 [==============================] - 0s 116us/step - loss: 2.9809e-07\nEpoch 326/500\n6/6 [==============================] - 0s 112us/step - loss: 2.9194e-07\nEpoch 
327/500\n6/6 [==============================] - 0s 110us/step - loss: 2.8594e-07\nEpoch 328/500\n6/6 [==============================] - 0s 114us/step - loss: 2.8005e-07\nEpoch 329/500\n6/6 [==============================] - 0s 115us/step - loss: 2.7437e-07\nEpoch 330/500\n6/6 [==============================] - 0s 123us/step - loss: 2.6869e-07\nEpoch 331/500\n6/6 [==============================] - 0s 109us/step - loss: 2.6309e-07\nEpoch 332/500\n6/6 [==============================] - 0s 129us/step - loss: 2.5778e-07\nEpoch 333/500\n6/6 [==============================] - 0s 128us/step - loss: 2.5253e-07\nEpoch 334/500\n6/6 [==============================] - 0s 107us/step - loss: 2.4735e-07\nEpoch 335/500\n6/6 [==============================] - 0s 116us/step - loss: 2.4230e-07\nEpoch 336/500\n6/6 [==============================] - 0s 124us/step - loss: 2.3725e-07\nEpoch 337/500\n6/6 [==============================] - 0s 109us/step - loss: 2.3237e-07\nEpoch 338/500\n6/6 [==============================] - 0s 108us/step - loss: 2.2766e-07\nEpoch 339/500\n6/6 [==============================] - 0s 111us/step - loss: 2.2295e-07\nEpoch 340/500\n6/6 [==============================] - 0s 111us/step - loss: 2.1837e-07\nEpoch 341/500\n6/6 [==============================] - 0s 139us/step - loss: 2.1397e-07\nEpoch 342/500\n6/6 [==============================] - 0s 119us/step - loss: 2.0949e-07\nEpoch 343/500\n6/6 [==============================] - 0s 134us/step - loss: 2.0523e-07\nEpoch 344/500\n6/6 [==============================] - 0s 132us/step - loss: 2.0104e-07\nEpoch 345/500\n6/6 [==============================] - 0s 138us/step - loss: 1.9692e-07\nEpoch 346/500\n6/6 [==============================] - 0s 111us/step - loss: 1.9290e-07\nEpoch 347/500\n6/6 [==============================] - 0s 106us/step - loss: 1.8883e-07\nEpoch 348/500\n6/6 [==============================] - 0s 112us/step - loss: 1.8509e-07\nEpoch 349/500\n6/6 [==============================] - 0s 129us/step - loss: 1.8124e-07\nEpoch 350/500\n6/6 [==============================] - 0s 118us/step - loss: 1.7755e-07\nEpoch 351/500\n6/6 [==============================] - 0s 112us/step - loss: 1.7382e-07\nEpoch 352/500\n6/6 [==============================] - 0s 134us/step - loss: 1.7023e-07\nEpoch 353/500\n6/6 [==============================] - 0s 130us/step - loss: 1.6677e-07\nEpoch 354/500\n6/6 [==============================] - 0s 138us/step - loss: 1.6330e-07\nEpoch 355/500\n6/6 [==============================] - 0s 112us/step - loss: 1.5998e-07\nEpoch 356/500\n6/6 [==============================] - 0s 112us/step - loss: 1.5668e-07\nEpoch 357/500\n6/6 [==============================] - 0s 120us/step - loss: 1.5344e-07\nEpoch 358/500\n6/6 [==============================] - 0s 116us/step - loss: 1.5036e-07\nEpoch 359/500\n6/6 [==============================] - 0s 112us/step - loss: 1.4720e-07\nEpoch 360/500\n6/6 [==============================] - 0s 130us/step - loss: 1.4428e-07\nEpoch 361/500\n6/6 [==============================] - 0s 118us/step - loss: 1.4128e-07\nEpoch 362/500\n6/6 [==============================] - 0s 92us/step - loss: 1.3832e-07\nEpoch 363/500\n6/6 [==============================] - 0s 111us/step - loss: 1.3555e-07\nEpoch 364/500\n6/6 [==============================] - 0s 105us/step - loss: 1.3275e-07\nEpoch 365/500\n6/6 [==============================] - 0s 99us/step - loss: 1.3002e-07\nEpoch 366/500\n6/6 [==============================] - 0s 98us/step - loss: 1.2728e-07\nEpoch 367/500\n6/6 
[==============================] - 0s 111us/step - loss: 1.2468e-07\nEpoch 368/500\n6/6 [==============================] - 0s 111us/step - loss: 1.2216e-07\nEpoch 369/500\n6/6 [==============================] - 0s 114us/step - loss: 1.1968e-07\nEpoch 370/500\n6/6 [==============================] - 0s 115us/step - loss: 1.1719e-07\nEpoch 371/500\n6/6 [==============================] - 0s 106us/step - loss: 1.1476e-07\nEpoch 372/500\n6/6 [==============================] - 0s 136us/step - loss: 1.1244e-07\nEpoch 373/500\n6/6 [==============================] - 0s 125us/step - loss: 1.1018e-07\nEpoch 374/500\n6/6 [==============================] - 0s 126us/step - loss: 1.0785e-07\nEpoch 375/500\n6/6 [==============================] - 0s 96us/step - loss: 1.0572e-07\nEpoch 376/500\n6/6 [==============================] - 0s 121us/step - loss: 1.0351e-07\nEpoch 377/500\n6/6 [==============================] - 0s 106us/step - loss: 1.0141e-07\nEpoch 378/500\n6/6 [==============================] - 0s 109us/step - loss: 9.9260e-08\nEpoch 379/500\n6/6 [==============================] - 0s 114us/step - loss: 9.7237e-08\nEpoch 380/500\n6/6 [==============================] - 0s 126us/step - loss: 9.5275e-08\nEpoch 381/500\n6/6 [==============================] - 0s 155us/step - loss: 9.3306e-08\nEpoch 382/500\n6/6 [==============================] - 0s 135us/step - loss: 9.1419e-08\nEpoch 383/500\n6/6 [==============================] - 0s 135us/step - loss: 8.9470e-08\nEpoch 384/500\n6/6 [==============================] - 0s 121us/step - loss: 8.7692e-08\nEpoch 385/500\n6/6 [==============================] - 0s 107us/step - loss: 8.5852e-08\nEpoch 386/500\n6/6 [==============================] - 0s 160us/step - loss: 8.4095e-08\nEpoch 387/500\n6/6 [==============================] - 0s 124us/step - loss: 8.2395e-08\nEpoch 388/500\n6/6 [==============================] - 0s 151us/step - loss: 8.0712e-08\nEpoch 389/500\n6/6 [==============================] - 0s 115us/step - loss: 7.9079e-08\nEpoch 390/500\n6/6 [==============================] - 0s 120us/step - loss: 7.7473e-08\nEpoch 391/500\n6/6 [==============================] - 0s 104us/step - loss: 7.5884e-08\nEpoch 392/500\n6/6 [==============================] - 0s 105us/step - loss: 7.4316e-08\nEpoch 393/500\n6/6 [==============================] - 0s 124us/step - loss: 7.2800e-08\nEpoch 394/500\n6/6 [==============================] - 0s 106us/step - loss: 7.1294e-08\nEpoch 395/500\n6/6 [==============================] - 0s 131us/step - loss: 6.9846e-08\nEpoch 396/500\n6/6 [==============================] - 0s 176us/step - loss: 6.8420e-08\nEpoch 397/500\n6/6 [==============================] - 0s 157us/step - loss: 6.7025e-08\nEpoch 398/500\n6/6 [==============================] - 0s 107us/step - loss: 6.5626e-08\nEpoch 399/500\n6/6 [==============================] - 0s 125us/step - loss: 6.4250e-08\nEpoch 400/500\n6/6 [==============================] - 0s 90us/step - loss: 6.2965e-08\nEpoch 401/500\n6/6 [==============================] - 0s 122us/step - loss: 6.1622e-08\nEpoch 402/500\n6/6 [==============================] - 0s 121us/step - loss: 6.0372e-08\nEpoch 403/500\n6/6 [==============================] - 0s 118us/step - loss: 5.9116e-08\nEpoch 404/500\n6/6 [==============================] - 0s 184us/step - loss: 5.7923e-08\nEpoch 405/500\n6/6 [==============================] - 0s 180us/step - loss: 5.6723e-08\nEpoch 406/500\n6/6 [==============================] - 0s 209us/step - loss: 5.5610e-08\nEpoch 407/500\n6/6 [==============================] - 
0s 126us/step - loss: 5.4441e-08\nEpoch 408/500\n6/6 [==============================] - 0s 141us/step - loss: 5.3338e-08\nEpoch 409/500\n6/6 [==============================] - 0s 136us/step - loss: 5.2289e-08\nEpoch 410/500\n6/6 [==============================] - 0s 128us/step - loss: 5.1215e-08\nEpoch 411/500\n6/6 [==============================] - 0s 141us/step - loss: 5.0168e-08\nEpoch 412/500\n6/6 [==============================] - 0s 124us/step - loss: 4.9121e-08\nEpoch 413/500\n6/6 [==============================] - 0s 123us/step - loss: 4.8102e-08\nEpoch 414/500\n6/6 [==============================] - 0s 220us/step - loss: 4.7169e-08\nEpoch 415/500\n6/6 [==============================] - 0s 134us/step - loss: 4.6176e-08\nEpoch 416/500\n6/6 [==============================] - 0s 120us/step - loss: 4.5244e-08\nEpoch 417/500\n6/6 [==============================] - 0s 111us/step - loss: 4.4306e-08\nEpoch 418/500\n6/6 [==============================] - 0s 108us/step - loss: 4.3398e-08\nEpoch 419/500\n6/6 [==============================] - 0s 139us/step - loss: 4.2500e-08\nEpoch 420/500\n6/6 [==============================] - 0s 130us/step - loss: 4.1632e-08\nEpoch 421/500\n6/6 [==============================] - 0s 113us/step - loss: 4.0758e-08\nEpoch 422/500\n6/6 [==============================] - 0s 133us/step - loss: 3.9940e-08\nEpoch 423/500\n6/6 [==============================] - 0s 142us/step - loss: 3.9067e-08\nEpoch 424/500\n6/6 [==============================] - 0s 116us/step - loss: 3.8272e-08\nEpoch 425/500\n6/6 [==============================] - 0s 128us/step - loss: 3.7499e-08\nEpoch 426/500\n6/6 [==============================] - 0s 173us/step - loss: 3.6725e-08\nEpoch 427/500\n6/6 [==============================] - 0s 116us/step - loss: 3.5974e-08\nEpoch 428/500\n6/6 [==============================] - 0s 113us/step - loss: 3.5215e-08\nEpoch 429/500\n6/6 [==============================] - 0s 125us/step - loss: 3.4502e-08\nEpoch 430/500\n6/6 [==============================] - 0s 120us/step - loss: 3.3798e-08\nEpoch 431/500\n6/6 [==============================] - 0s 112us/step - loss: 3.3094e-08\nEpoch 432/500\n6/6 [==============================] - 0s 105us/step - loss: 3.2408e-08\nEpoch 433/500\n6/6 [==============================] - 0s 135us/step - loss: 3.1752e-08\nEpoch 434/500\n6/6 [==============================] - 0s 114us/step - loss: 3.1123e-08\nEpoch 435/500\n6/6 [==============================] - 0s 108us/step - loss: 3.0505e-08\nEpoch 436/500\n6/6 [==============================] - 0s 117us/step - loss: 2.9875e-08\nEpoch 437/500\n6/6 [==============================] - 0s 123us/step - loss: 2.9253e-08\nEpoch 438/500\n6/6 [==============================] - 0s 100us/step - loss: 2.8660e-08\nEpoch 439/500\n6/6 [==============================] - 0s 112us/step - loss: 2.8055e-08\nEpoch 440/500\n6/6 [==============================] - 0s 118us/step - loss: 2.7524e-08\nEpoch 441/500\n6/6 [==============================] - 0s 107us/step - loss: 2.6968e-08\nEpoch 442/500\n6/6 [==============================] - 0s 109us/step - loss: 2.6387e-08\nEpoch 443/500\n6/6 [==============================] - 0s 115us/step - loss: 2.5865e-08\nEpoch 444/500\n6/6 [==============================] - 0s 104us/step - loss: 2.5343e-08\nEpoch 445/500\n6/6 [==============================] - 0s 151us/step - loss: 2.4817e-08\nEpoch 446/500\n6/6 [==============================] - 0s 136us/step - loss: 2.4327e-08\nEpoch 447/500\n6/6 [==============================] - 0s 135us/step - loss: 
2.3818e-08\nEpoch 448/500\n6/6 [==============================] - 0s 135us/step - loss: 2.3324e-08\nEpoch 449/500\n6/6 [==============================] - 0s 121us/step - loss: 2.2866e-08\nEpoch 450/500\n6/6 [==============================] - 0s 121us/step - loss: 2.2371e-08\nEpoch 451/500\n6/6 [==============================] - 0s 125us/step - loss: 2.1897e-08\nEpoch 452/500\n6/6 [==============================] - 0s 116us/step - loss: 2.1476e-08\nEpoch 453/500\n6/6 [==============================] - 0s 107us/step - loss: 2.1011e-08\nEpoch 454/500\n6/6 [==============================] - 0s 108us/step - loss: 2.0585e-08\nEpoch 455/500\n6/6 [==============================] - 0s 125us/step - loss: 2.0154e-08\nEpoch 456/500\n6/6 [==============================] - 0s 120us/step - loss: 1.9737e-08\nEpoch 457/500\n6/6 [==============================] - 0s 106us/step - loss: 1.9299e-08\nEpoch 458/500\n6/6 [==============================] - 0s 118us/step - loss: 1.8910e-08\nEpoch 459/500\n6/6 [==============================] - 0s 145us/step - loss: 1.8524e-08\nEpoch 460/500\n6/6 [==============================] - 0s 125us/step - loss: 1.8159e-08\nEpoch 461/500\n6/6 [==============================] - 0s 154us/step - loss: 1.7796e-08\nEpoch 462/500\n6/6 [==============================] - 0s 130us/step - loss: 1.7425e-08\nEpoch 463/500\n6/6 [==============================] - 0s 115us/step - loss: 1.7075e-08\nEpoch 464/500\n6/6 [==============================] - 0s 114us/step - loss: 1.6702e-08\nEpoch 465/500\n6/6 [==============================] - 0s 122us/step - loss: 1.6374e-08\nEpoch 466/500\n6/6 [==============================] - 0s 110us/step - loss: 1.6061e-08\nEpoch 467/500\n6/6 [==============================] - 0s 129us/step - loss: 1.5715e-08\nEpoch 468/500\n6/6 [==============================] - 0s 108us/step - loss: 1.5401e-08\nEpoch 469/500\n6/6 [==============================] - 0s 100us/step - loss: 1.5072e-08\nEpoch 470/500\n6/6 [==============================] - 0s 119us/step - loss: 1.4769e-08\nEpoch 471/500\n6/6 [==============================] - 0s 110us/step - loss: 1.4475e-08\nEpoch 472/500\n6/6 [==============================] - 0s 103us/step - loss: 1.4188e-08\nEpoch 473/500\n6/6 [==============================] - 0s 133us/step - loss: 1.3906e-08\nEpoch 474/500\n6/6 [==============================] - 0s 129us/step - loss: 1.3625e-08\nEpoch 475/500\n6/6 [==============================] - 0s 128us/step - loss: 1.3346e-08\nEpoch 476/500\n6/6 [==============================] - 0s 123us/step - loss: 1.3076e-08\nEpoch 477/500\n6/6 [==============================] - 0s 107us/step - loss: 1.2806e-08\nEpoch 478/500\n6/6 [==============================] - 0s 106us/step - loss: 1.2540e-08\nEpoch 479/500\n6/6 [==============================] - 0s 140us/step - loss: 1.2288e-08\nEpoch 480/500\n6/6 [==============================] - 0s 130us/step - loss: 1.2053e-08\nEpoch 481/500\n6/6 [==============================] - 0s 135us/step - loss: 1.1799e-08\nEpoch 482/500\n6/6 [==============================] - 0s 111us/step - loss: 1.1569e-08\nEpoch 483/500\n6/6 [==============================] - 0s 110us/step - loss: 1.1330e-08\nEpoch 484/500\n6/6 [==============================] - 0s 130us/step - loss: 1.1107e-08\nEpoch 485/500\n6/6 [==============================] - 0s 124us/step - loss: 1.0873e-08\nEpoch 486/500\n6/6 [==============================] - 0s 119us/step - loss: 1.0655e-08\nEpoch 487/500\n6/6 [==============================] - 0s 108us/step - loss: 1.0429e-08\nEpoch 488/500\n6/6 
[==============================] - 0s 113us/step - loss: 1.0225e-08\nEpoch 489/500\n6/6 [==============================] - 0s 125us/step - loss: 1.0003e-08\nEpoch 490/500\n6/6 [==============================] - 0s 135us/step - loss: 9.7966e-09\nEpoch 491/500\n6/6 [==============================] - 0s 159us/step - loss: 9.6028e-09\nEpoch 492/500\n6/6 [==============================] - 0s 184us/step - loss: 9.4121e-09\nEpoch 493/500\n6/6 [==============================] - 0s 151us/step - loss: 9.2201e-09\nEpoch 494/500\n6/6 [==============================] - 0s 148us/step - loss: 9.0133e-09\nEpoch 495/500\n6/6 [==============================] - 0s 118us/step - loss: 8.8277e-09\nEpoch 496/500\n6/6 [==============================] - 0s 112us/step - loss: 8.6555e-09\nEpoch 497/500\n6/6 [==============================] - 0s 98us/step - loss: 8.4635e-09\nEpoch 498/500\n6/6 [==============================] - 0s 119us/step - loss: 8.2862e-09\nEpoch 499/500\n6/6 [==============================] - 0s 157us/step - loss: 8.1217e-09\nEpoch 500/500\n6/6 [==============================] - 0s 126us/step - loss: 7.9374e-09\n"
]
],
[
[
"Ok, now you have a model that has been trained to learn the relationshop between X and Y. You can use the **model.predict** method to have it figure out the Y for a previously unknown X. So, for example, if X = 10, what do you think Y will be? Take a guess before you run this code:",
"_____no_output_____"
]
],
[
[
"print(model.predict([10.0]))",
"[[ 31.00025749]]\n"
]
],
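 [
  [
   "# (Added aside.) Before reading on about why the prediction is not exactly 31, one way to\n# peek at what the single neuron actually learned is to look at its kernel and bias -- they\n# should be very close to 3 and 1. This assumes the trained 'model' from the cells above.\nweights, bias = model.layers[0].get_weights()\nprint(weights, bias)",
   "_____no_output_____"
  ]
 ],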
[
[
"You might have thought 31, right? But it ended up being a little over. Why do you think that is? \n\nRemember that neural networks deal with probabilities, so given the data that we fed the NN with, it calculated that there is a very high probability that the relationship between X and Y is Y=3X+1, but with only 6 data points we can't know for sure. As a result, the result for 10 is very close to 31, but not necessarily 31. \n\nAs you work with neural networks, you'll see this pattern recurring. You will almost always deal with probabilities, not certainties, and will do a little bit of coding to figure out what the result is based on the probabilities, particularly when it comes to classification.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb5a73e26a8d5926f278c67701b6564f9a4f3877 | 9,424 | ipynb | Jupyter Notebook | list_tuple_dict.ipynb | RipsimeArutyunya/EconometricsPractice2022 | 8292b0f96ccbac2696ab1e3c3e25fa775136a0db | [
"MIT"
] | null | null | null | list_tuple_dict.ipynb | RipsimeArutyunya/EconometricsPractice2022 | 8292b0f96ccbac2696ab1e3c3e25fa775136a0db | [
"MIT"
] | null | null | null | list_tuple_dict.ipynb | RipsimeArutyunya/EconometricsPractice2022 | 8292b0f96ccbac2696ab1e3c3e25fa775136a0db | [
"MIT"
] | null | null | null | 18.263566 | 458 | 0.447687 | [
[
[
"ls = ['apple', 'banana', 10, 22.5]\nls5 = ['a', 'b', [1,2,3], ('a', 'b')]\nls",
"_____no_output_____"
],
[
"type(ls)",
"_____no_output_____"
],
[
"ls.append(10)\nls",
"_____no_output_____"
],
[
"tup = tuple(ls)\ntup2 = (1,2,3,4,5)\ntup = (1,) #(1) = integer\ntup",
"_____no_output_____"
],
[
"tup[0] = 1\ntup",
"_____no_output_____"
],
[
"ls[0] = 1\nls",
"_____no_output_____"
],
[
"len(ls) # 0 1 2 3 = 4 elements | -4 -3 -2 -1",
"_____no_output_____"
],
[
"ls[0] == ls[-4]",
"_____no_output_____"
],
[
"ls[:-2]",
"_____no_output_____"
],
[
"ls.remove(10)\nls",
"_____no_output_____"
],
[
"a = \"apple\"\nb = \"banana\"\nc = a + \" \" + b\nc\n",
"_____no_output_____"
],
[
"ls1 = [1,2,3,4,5]\nls2 = ['a', 'b', 'c', 'd']\nls1 + ls2",
"_____no_output_____"
],
[
"# list, find the length the list, find elements between second element and 3rd from the end\n# use append, use remove",
"_____no_output_____"
],
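   [
    "# A possible solution sketch for the exercise above (added example; the values are arbitrary):\nmy_list = ['a', 'b', 'c', 'd', 'e', 'f']\nprint(len(my_list)) # length of the list\nprint(my_list[1:-3]) # elements between the second element and the 3rd from the end\nmy_list.append('g') # add an element at the end\nmy_list.remove('a') # remove the first occurrence of 'a'\nmy_list",
    "_____no_output_____"
   ],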
[
"d = {'key1':'value1'} #keys are immutable => number, string, tuple; alues can be of any type",
"_____no_output_____"
],
[
"type(d)",
"_____no_output_____"
],
[
"#ls[0] = 1\nd['key2'] = 'value2'\nd",
"_____no_output_____"
],
[
"d['key2']",
"_____no_output_____"
],
[
"d['key1'] = 'new_value1'\nd",
"_____no_output_____"
],
[
"len(d)",
"_____no_output_____"
],
[
"d.keys()",
"_____no_output_____"
],
[
"d.values()",
"_____no_output_____"
],
[
"d.items()",
"_____no_output_____"
],
[
"d.pop('key1')",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
],
[
"#create dictionary, add element to dictionary, change element's value",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5a78c5f0f01e6c1c3ad062f498e11928239f82 | 333,208 | ipynb | Jupyter Notebook | boqn/group_testing/notebooks/individual_interaction_contact_tracing.ipynb | RaulAstudillo06/BOQN | c5b2bb9e547e2489f856ebf86c749fb24eba1022 | [
"MIT"
] | null | null | null | boqn/group_testing/notebooks/individual_interaction_contact_tracing.ipynb | RaulAstudillo06/BOQN | c5b2bb9e547e2489f856ebf86c749fb24eba1022 | [
"MIT"
] | null | null | null | boqn/group_testing/notebooks/individual_interaction_contact_tracing.ipynb | RaulAstudillo06/BOQN | c5b2bb9e547e2489f856ebf86c749fb24eba1022 | [
"MIT"
] | 1 | 2022-03-09T02:32:42.000Z | 2022-03-09T02:32:42.000Z | 1,394.175732 | 325,398 | 0.944359 | [
[
[
"import sys\nimport os\n\nmodule_path = os.path.abspath(os.path.join('..'))\nif module_path not in sys.path:\n sys.path.append(module_path + \"/src\")",
"_____no_output_____"
],
[
"from simulation import BaseSimulation\nfrom individual_interaction_population import IndividualInteractionPopulation\nfrom base_test_protocol import ContactTraceProtocol, QuarantineSymptomaticProtocol",
"_____no_output_____"
],
[
"import numpy as np\ndef prepare_pop(interactions_pp):\n n_agents = int(1E3)\n disease_length = 14\n quarantine_length = 14\n days_until_symptomatic = 7\n interaction_frequency_lambda = interactions_pp\n\n population = IndividualInteractionPopulation(n_agents,\n disease_length,\n quarantine_length,\n days_until_symptomatic,\n interaction_frequency_lambda,\n interaction_infection_pct=0.05,\n initial_prevalence=0.005)\n \n # select only a single individual to be infected:\n infected_agent = np.random.choice(range(n_agents))\n for agent_idx in range(n_agents):\n if agent_idx == infected_agent:\n population.infection_status[agent_idx] = True\n else:\n population.infection_status[agent_idx] = False\n return population",
"_____no_output_____"
],
[
"def run_simulation(interactions_pp, time_horizon, test_protocol, verbose=False):\n pop = prepare_pop(interactions_pp)\n \n simulation = BaseSimulation(pop, test_protocol, test_frequency=1, test_latency=0)\n for day in range(time_horizon):\n simulation.step()\n if verbose:\n print(\"Done simulating day {}\".format(day+1))\n \n return simulation",
"_____no_output_____"
],
[
"sim_results_notrace = {}\nsim_results_trace = {}\ninteractions_per_person_values = [1,2,3,4,5,6,7,8,9]\ntime_horizon = 200\n\nR0 = {}\nfor ipp in interactions_per_person_values:\n R0[ipp] = 0.05 * 7 * ipp\n print(\"R0 under symptomatic-only quarantine, under lambda = {}, is equal to {:.2f}\".format(ipp, 0.05 * 7 * ipp))\n\n\nfor interactions_pp in interactions_per_person_values:\n sim_results_notrace[interactions_pp] = []\n sim_results_trace[interactions_pp] = []\n \n for x in range(25):\n notrace_test = QuarantineSymptomaticProtocol()\n sim_results_notrace[interactions_pp].append(run_simulation(interactions_pp, time_horizon, notrace_test))\n \n trace_test = ContactTraceProtocol()\n sim_results_trace[interactions_pp].append(run_simulation(interactions_pp, time_horizon, trace_test))\n \n print(\"Done iteration for interactions_pp value {}\".format(interactions_pp))",
"R0 under symptomatic-only quarantine, under lambda = 1, is equal to 0.35\nR0 under symptomatic-only quarantine, under lambda = 2, is equal to 0.70\nR0 under symptomatic-only quarantine, under lambda = 3, is equal to 1.05\nR0 under symptomatic-only quarantine, under lambda = 4, is equal to 1.40\nR0 under symptomatic-only quarantine, under lambda = 5, is equal to 1.75\nR0 under symptomatic-only quarantine, under lambda = 6, is equal to 2.10\nR0 under symptomatic-only quarantine, under lambda = 7, is equal to 2.45\nR0 under symptomatic-only quarantine, under lambda = 8, is equal to 2.80\nR0 under symptomatic-only quarantine, under lambda = 9, is equal to 3.15\nDone iteration for interactions_pp value 1\nDone iteration for interactions_pp value 2\nDone iteration for interactions_pp value 3\nDone iteration for interactions_pp value 4\nDone iteration for interactions_pp value 5\nDone iteration for interactions_pp value 6\nDone iteration for interactions_pp value 7\nDone iteration for interactions_pp value 8\nDone iteration for interactions_pp value 9\n"
],
[
"import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.rcParams['font.size'] = 12\n\ndef add_plot(sim, days, color):\n infections = [sim.summary_population_data[day]['cumulative_num_infected'] for day in days]\n plt.plot(days, infections, linewidth=10.0, alpha=0.1, color=color)\n\n\nplt.figure(figsize=(20,50))\ninteractions_per_person_values = [1,2,3,4,5,6,7,8,9]\n\ncolors={1:'purple', 2:'red', 3:'orange', 4:'green', 5:'blue'}\n\n\n\nsubplot_val = 1\nnrows = 9\nncols = 2\n\ndays = list(range(time_horizon))\n\nfor interactions_pp in interactions_per_person_values:\n color = colors[(interactions_pp-1) % 5 + 1]\n plt.subplot(nrows, ncols, subplot_val)\n subplot_val += 1\n \n plt.title(\"Without Contact Tracing; lambda = {}; R0 = {:.2f}\".format(interactions_pp, R0[interactions_pp]))\n plt.ylim(-100,1100)\n \n for sim in sim_results_notrace[interactions_pp]:\n add_plot(sim, days, color)\n \n plt.subplot(nrows, ncols, subplot_val)\n subplot_val += 1\n \n plt.title(\"With Contact Tracing; lambda = {}; R0 = {:.2f}\".format(interactions_pp, R0[interactions_pp]))\n plt.ylim(-100,1100)\n \n for sim in sim_results_trace[interactions_pp]:\n add_plot(sim, days, color)\n \nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5a82212eac21b10b6c0891555fb1a9cf4831ce | 439,966 | ipynb | Jupyter Notebook | thesis/_build/html/_sources/treatment.ipynb | ASSahasranamam/thesis | c8d40e822cbe207eabda33748b2c0a25bc921046 | [
"MIT"
] | 1 | 2021-03-03T05:44:47.000Z | 2021-03-03T05:44:47.000Z | thesis/_build/html/_sources/treatment.ipynb | ASSahasranamam/thesis | c8d40e822cbe207eabda33748b2c0a25bc921046 | [
"MIT"
] | null | null | null | thesis/_build/html/_sources/treatment.ipynb | ASSahasranamam/thesis | c8d40e822cbe207eabda33748b2c0a25bc921046 | [
"MIT"
] | null | null | null | 284.399483 | 38,108 | 0.896831 | [
[
[
"\n# Statistics & Data Analysis\n",
"_____no_output_____"
],
[
"## Req",
"_____no_output_____"
],
[
"#### Import Requirements",
"_____no_output_____"
],
[
"##### HTML formatting",
"_____no_output_____"
]
],
[
[
"from IPython.display import HTML\n\nHTML(\"\"\"<style type=\"text/css\">\n table.dataframe td, table.dataframe th {\n max-width: none;\n</style>\n\"\"\")\n\n\nHTML(\"\"\"<style type=\"text/css\">\n table.dataframe td, table.dataframe th {\n max-width: none;\n white-space: normal;\n }\n</style>\n\"\"\")\n\n\nHTML(\"\"\"<style type=\"text/css\">\n table.dataframe td, table.dataframe th {\n max-width: none;\n white-space: normal;\n line-height: normal;\n }\n</style>\n\"\"\")\n\n\nHTML(\"\"\"<style type=\"text/css\">\n table.dataframe td, table.dataframe th {\n max-width: none;\n white-space: normal;\n line-height: normal;\n padding: 0.3em 0.5em;\n }\n</style>\n\"\"\")",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport scipy\nimport matplotlib.pyplot as plt\nfrom pandas.api.types import CategoricalDtype\nfrom plotnine import *\nfrom scipy.stats import *\nimport scikit_posthocs as sp\n\n\n\ndata = pd.read_csv(\"./NewCols.csv\")\n\n",
"_____no_output_____"
]
],
[
[
"## Calculating the differences between the noremalized values. ",
"_____no_output_____"
]
],
[
[
"data_control = data[data[\"treatment\"] == \"baseline\"]\ndata_control.to_csv(\"./control.csv\")\ndata_treatment = data[data[\"treatment\"] == \"intravenous LPS\"]\ndata_control.to_csv(\"./lps.csv\")\n\nprocData = data_treatment\n\n\nprocData['diff_AVAR2'] = (\n np.array(data_treatment[\"AVAR2\"]) - np.array(data_control[\"AVAR2\"])).tolist()\nprocData[\"diff_CVAR2\"] = (\n np.array(data_treatment[\"CVAR2\"]) - np.array(data_control[\"CVAR2\"])).tolist()\nprocData[\"diff_AWT2\"] = (np.array(data_treatment[\"AWT2\"]) -\n np.array(data_control[\"AWT2\"])).tolist()\nprocData[\"diff_CWT2\"] = (np.array(data_treatment[\"CWT2\"]) -\n np.array(data_control[\"CWT2\"])).tolist()\n\n\nprocData[\"diff_total2\"] = (\n np.array(data_treatment[\"total2\"]) - np.array(data_control[\"total2\"])).tolist()\nprocData[\"diff_totalA\"] = (\n np.array(data_treatment[\"totalA\"]) - np.array(data_control[\"totalA\"])).tolist()\nprocData[\"diff_totalC\"] = (\n np.array(data_treatment[\"totalC\"]) - np.array(data_control[\"totalC\"])).tolist()\nprocData[\"diff_totalWT\"] = (np.array(\n data_treatment[\"totalWT\"]) - np.array(data_control[\"totalWT\"])).tolist()\nprocData[\"diff_totalVar\"] = (np.array(\n data_treatment[\"totalVar\"]) - np.array(data_control[\"totalVar\"])).tolist()\n\nprocData.to_csv(\"./procData.csv\")",
"<ipython-input-446-f984afccbdb6>:9: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n<ipython-input-446-f984afccbdb6>:11: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n<ipython-input-446-f984afccbdb6>:13: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n<ipython-input-446-f984afccbdb6>:15: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n<ipython-input-446-f984afccbdb6>:19: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n<ipython-input-446-f984afccbdb6>:21: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n<ipython-input-446-f984afccbdb6>:23: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n<ipython-input-446-f984afccbdb6>:25: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n<ipython-input-446-f984afccbdb6>:27: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n"
],
[
"newDF= data_control[[\"testGroup\",\"tg2\"]]\nnewDF\n",
"_____no_output_____"
],
[
"newDF.rename(columns = {'testGroup':'c_tg','tg2':'c_tg2'}, inplace=True) \nnewDF\n",
"/usr/local/lib/python3.9/site-packages/pandas/core/frame.py:4441: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n"
],
[
"newDF.index = procData.index\nprocData= pd.concat([procData,newDF], axis=1)\n",
"_____no_output_____"
]
],
[
[
"#### Difference Table\n",
"_____no_output_____"
]
],
[
[
"\npd.set_option('display.max_rows', procData.shape[0]+1)\n\ndiff_data = procData.loc[ :,\"diff_AVAR2\":\"diff_totalVar\" ]\ndiff_data.to_csv(\"./diffData.csv\")",
"_____no_output_____"
],
[
"diff_data.describe()",
"_____no_output_____"
],
[
"diff_data.var()\n",
"_____no_output_____"
],
[
"diff_data.std()\n",
"_____no_output_____"
],
[
"diff_data.skew()\n",
"_____no_output_____"
],
[
"diff_data.kurtosis().tolist()",
"_____no_output_____"
],
[
"diff_data.kurtosis()",
"_____no_output_____"
]
],
[
[
"## Graph Data - ",
"_____no_output_____"
]
],
[
[
"from plotnine import *\nggplot(data, aes(x='treatment', y='AWT2') ) + geom_boxplot() + geom_jitter(data,aes(colour='treatment',shape='treatment'))",
"_____no_output_____"
],
[
"a = 0.05\n\nwilcoxon(data_control[\"AWT2\"],data_treatment[\"AWT2\"])\n",
"/usr/local/lib/python3.9/site-packages/scipy/stats/morestats.py:2967: UserWarning: Exact p-value calculation does not work if there are ties. Switching to normal approximation.\n"
],
[
"ggplot(data, aes(x='treatment', y='CWT2') ) + geom_boxplot() + geom_jitter(data,aes(colour='treatment',shape='treatment'))\n",
"_____no_output_____"
],
[
"a = 0.05\n\nwilcoxon(data_control[\"CWT2\"],data_treatment[\"CWT2\"])\n",
"/usr/local/lib/python3.9/site-packages/scipy/stats/morestats.py:2967: UserWarning: Exact p-value calculation does not work if there are ties. Switching to normal approximation.\n/usr/local/lib/python3.9/site-packages/scipy/stats/morestats.py:2981: UserWarning: Sample size too small for normal approximation.\n"
],
[
"ggplot(data, aes(x='treatment', y='AVAR2') ) + geom_boxplot() + geom_jitter(data,aes(colour='treatment',shape='treatment'))\n",
"_____no_output_____"
],
[
"a = 0.05\n\nwilcoxon(data_control[\"AVAR2\"],data_treatment[\"AVAR2\"])\n",
"/usr/local/lib/python3.9/site-packages/scipy/stats/morestats.py:2967: UserWarning: Exact p-value calculation does not work if there are ties. Switching to normal approximation.\n/usr/local/lib/python3.9/site-packages/scipy/stats/morestats.py:2981: UserWarning: Sample size too small for normal approximation.\n"
],
[
"ggplot(data, aes(x='treatment', y='CVAR2') ) + geom_boxplot() + geom_jitter(data,aes(colour='treatment',shape='treatment'))\n",
"_____no_output_____"
],
[
"a = 0.05\n\nwilcoxon(data_control[\"CVAR2\"],data_treatment[\"CVAR2\"])\n",
"/usr/local/lib/python3.9/site-packages/scipy/stats/morestats.py:2967: UserWarning: Exact p-value calculation does not work if there are ties. Switching to normal approximation.\n/usr/local/lib/python3.9/site-packages/scipy/stats/morestats.py:2981: UserWarning: Sample size too small for normal approximation.\n"
],
[
"removed_outliers = data.total2.between(data.total2.quantile(.05), data.total2.quantile(.95))\ndata_total= data[removed_outliers]\nggplot(data_total, aes(x='treatment',y=\"total2\" ), ) + geom_boxplot(outlier_shape = \"\") + geom_jitter(data_total,aes(y=\"total2\",colour='treatment',shape='treatment') ) + ggtitle(\"QQ Plot of IRAK-1 expression per GbP\") + xlab(\"Treatment\") + ylab(\"Total IRAK-1 Levels per Gigabase pair\") + ylim(data_total.total2.quantile(.05), data_total.total2.quantile(.95))",
"/usr/local/lib/python3.9/site-packages/plotnine/layer.py:372: PlotnineWarning: stat_boxplot : Removed 4 rows containing non-finite values.\n/usr/local/lib/python3.9/site-packages/plotnine/layer.py:467: PlotnineWarning: geom_jitter : Removed 6 rows containing missing values.\n"
],
[
"a = 0.05\n\nwilcoxon(diff_data[\"diff_total2\"])\n",
"_____no_output_____"
],
[
"removed_outliers_diffData = diff_data.diff_total2.between(diff_data.diff_total2.quantile(.05), diff_data.diff_total2.quantile(.95))\ndifftotalData=diff_data[removed_outliers_diffData]\nggplot(difftotalData, aes( x='0',y='diff_total2') ) + geom_boxplot() + geom_point(color=\"red\") + ylim(difftotalData.diff_total2.quantile(.05), difftotalData.diff_total2.quantile(.95)) + ggtitle(\"QQ Plot of changes in IRAK-1 levels per Gbp\") + xlab(\"Treatment\") + ylab(\"Changes in IRAK-1 Levels per Gigabase pair\") \n",
"/usr/local/lib/python3.9/site-packages/plotnine/layer.py:372: PlotnineWarning: stat_boxplot : Removed 2 rows containing non-finite values.\n/usr/local/lib/python3.9/site-packages/plotnine/layer.py:467: PlotnineWarning: geom_point : Removed 2 rows containing missing values.\n"
],
[
"data_plot = data_treatment\ncontrolData = data_control['total2']\ncontrolData",
"_____no_output_____"
],
[
"data_plot[\"ctrl_total2\"]=controlData.to_list()\ndata_plot\n\n",
"<ipython-input-469-e65ed6e0bbb2>:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n"
],
[
"from sklearn.linear_model import LinearRegression\nmodel = LinearRegression().fit(data_plot.total2.to_numpy().reshape((-1, 1)), data_plot.ctrl_total2)\nr_sq= model.score(data_plot.total2.to_numpy().reshape((-1, 1)), data_plot.ctrl_total2)\nprint('coefficient of determination:', r_sq)\nprint('intercept:', model.intercept_)\nprint('slope:', model.coef_)",
"coefficient of determination: 0.7102999411988566\nintercept: 1.470657110816588\nslope: [0.95817383]\n"
],
[
"\nggplot(data_plot,aes(x='total2',y='ctrl_total2') ) + geom_point() + geom_smooth(method='lm')\n",
"_____no_output_____"
],
[
"\nfrom sklearn import linear_model\nlm = linear_model.LinearRegression()\n",
"_____no_output_____"
],
[
"shapiro_test = shapiro(data_control['total2'])\nshapiro_test",
"_____no_output_____"
],
[
"shapiro_test = shapiro(data_treatment['total2'])\nshapiro_test",
"_____no_output_____"
],
[
"shapiro_test = shapiro(diff_data['diff_total2'])\nshapiro_test",
"_____no_output_____"
],
[
"ggplot(data, aes(x='treatment', y='totalVar') ) + geom_boxplot() + geom_jitter(data,aes(colour='treatment',shape='treatment'))\n",
"_____no_output_____"
],
[
"a = 0.05\n\nwilcoxon(diff_data[\"diff_totalVar\"])\n",
"/usr/local/lib/python3.9/site-packages/scipy/stats/morestats.py:2967: UserWarning: Exact p-value calculation does not work if there are ties. Switching to normal approximation.\n/usr/local/lib/python3.9/site-packages/scipy/stats/morestats.py:2981: UserWarning: Sample size too small for normal approximation.\n"
],
[
"ggplot(data, aes(x='treatment', y='totalWT') ) + geom_boxplot() + geom_jitter(data,aes(colour='treatment',shape='treatment'))\n",
"_____no_output_____"
],
[
"a = 0.05\n\nwilcoxon(diff_data[\"diff_totalWT\"])\n",
"/usr/local/lib/python3.9/site-packages/scipy/stats/morestats.py:2967: UserWarning: Exact p-value calculation does not work if there are ties. Switching to normal approximation.\n"
],
[
"ggplot(data, aes(x='treatment', y='totalA') ) + geom_boxplot() + geom_jitter(data,aes(colour='treatment',shape='treatment'))\n",
"_____no_output_____"
],
[
"a = 0.05\n\nwilcoxon(diff_data[\"diff_totalA\"])\n",
"_____no_output_____"
],
[
"ggplot(data, aes(x='treatment', y='totalC') ) + geom_boxplot() + geom_jitter(data,aes(colour='treatment',shape='treatment'))\n",
"_____no_output_____"
],
[
"a = 0.05\n\nwilcoxon(diff_data[\"diff_totalC\"])\n",
"/usr/local/lib/python3.9/site-packages/scipy/stats/morestats.py:2967: UserWarning: Exact p-value calculation does not work if there are ties. Switching to normal approximation.\n"
]
],
[
[
"## Statistics",
"_____no_output_____"
],
[
"### Total 2 Comparison",
"_____no_output_____"
],
[
"#### Wilcoxon non-parametric",
"_____no_output_____"
]
],
[
[
"a = 0.05\n\nw, p = wilcoxon(data_control[\"total2\"],data_treatment[\"total2\"])\nprint(w, p)",
"13.0 0.00537109375\n"
],
[
"if (p < a):\n print(\"As P\"+str(p)+\" is less than a: \"+str(a))\n print( \"we reject the Null Hypothesis.\")\n print(\". There is significant difference betwween the groups\")\nelse: \n print(\"As P\"+p+\" is larger than a: \"+str(a))\n print( \"we FAIL TO reject the Null Hypothesis.\")\n print(\". There is NOT a significant difference betwween the groups\")",
"As P0.00537109375 is less than a: 0.05\nwe reject the Null Hypothesis.\n. There is significant difference betwween the groups\n"
]
],
[
[
"#### Freidman's Anova",
"_____no_output_____"
]
],
[
[
"sp.posthoc_nemenyi_friedman(diff_data)",
"_____no_output_____"
]
],
[
[
"Friedman Tes ",
"_____no_output_____"
],
[
"### other",
"_____no_output_____"
]
],
[
[
"a = 0.05\n\nw, p = wilcoxon((data_control[\"totalA\"]/data_control[\"totalC\"] ),(data_treatment[\"totalA\"]/data_treatment[\"totalC\"]))\nprint(w, p)",
"48.0 0.52447509765625\n"
],
[
"a = 0.05\n\nw, p = wilcoxon((data_control[\"AVAR2\"]/data_control[\"CVAR2\"] ),(data_treatment[\"AVAR2\"]/data_treatment[\"CVAR2\"]))\nprint(w, p)",
"11.0 0.00335693359375\n"
],
[
"a = 0.05\n\nw, p = wilcoxon((data_control[\"AWT2\"]/data_control[\"CWT2\"] ),(data_treatment[\"AWT2\"]/data_treatment[\"CWT2\"]))\nprint(w, p)",
"19.0 0.05535888671875\n"
],
[
"ggplot()+geom_histogram(procData,aes(x=\"tg2\"))",
"/usr/local/lib/python3.9/site-packages/plotnine/stats/stat_bin.py:93: PlotnineWarning: 'stat_bin()' using 'bins = 3'. Pick better value with 'binwidth'.\n"
],
[
"ggplot()+geom_histogram(procData,aes(x=\"mutant\"))",
"/usr/local/lib/python3.9/site-packages/plotnine/stats/stat_bin.py:93: PlotnineWarning: 'stat_bin()' using 'bins = 3'. Pick better value with 'binwidth'.\n"
],
[
"ggplot()+geom_bar(procData,aes(x=\"spliceVariant\",fill=\"mutant\"))",
"_____no_output_____"
],
[
"ggplot()+geom_col(procData,aes(x=\"spliceVariant\",y=\"diff_totalA/diff_totalC\",fill=\"mutant\"))",
"/usr/local/lib/python3.9/site-packages/plotnine/layer.py:467: PlotnineWarning: geom_col : Removed 9 rows containing missing values.\n"
],
[
"a = 0.05\ndiff_data = procData[(data[\"totalC\"] > 0 ) & (data[\"totalA\"] > 0 )]\nggplot()+geom_histogram(diff_data,aes(x=\"tg2\"))",
"<ipython-input-494-7ecc4e451223>:2: UserWarning: Boolean Series key will be reindexed to match DataFrame index.\n/usr/local/lib/python3.9/site-packages/plotnine/stats/stat_bin.py:93: PlotnineWarning: 'stat_bin()' using 'bins = 3'. Pick better value with 'binwidth'.\n"
],
[
"\nw, p = wilcoxon((diff_data[\"totalC\"] )/(diff_data[\"totalA\"]))\nprint(w, p)",
"0.0 0.0009765625\n"
],
[
"a = 0.05\n\nw, p = wilcoxon(data_control[\"total2\"],data_treatment[\"total2\"])\nprint(w, p)",
"13.0 0.00537109375\n"
]
],
[
[
"2 graphs \n\n1. Do the Table\n3. Black and white\n3. Make sure its not sloppy\n4. \n\ncontrol, LPS & Difference.\n\ncorrelation plot for each patient - total 2 & diff_total2\n\nLook for A/C ratios \n\nggplot(data_plot,aes(x='total2',y='ctrl_total2') ) + geom_point(colour) + geom_smooth(method='lm')\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
cb5a9212387d1a7d5a98aa421375fab0e9468566 | 15,021 | ipynb | Jupyter Notebook | notebooks/Time_Series/Basic Time Series Plotting.ipynb | julienchastang/unidata-python-workshop | be132446aac7a902afadaafedd14c2378ed4f559 | [
"MIT"
] | null | null | null | notebooks/Time_Series/Basic Time Series Plotting.ipynb | julienchastang/unidata-python-workshop | be132446aac7a902afadaafedd14c2378ed4f559 | [
"MIT"
] | null | null | null | notebooks/Time_Series/Basic Time Series Plotting.ipynb | julienchastang/unidata-python-workshop | be132446aac7a902afadaafedd14c2378ed4f559 | [
"MIT"
] | null | null | null | 28.998069 | 414 | 0.571267 | [
[
[
"<a name=\"top\"></a>\n<div style=\"width:1000 px\">\n\n<div style=\"float:right; width:98 px; height:98px;\">\n<img src=\"https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png\" alt=\"Unidata Logo\" style=\"height: 98px;\">\n</div>\n\n<h1>Basic Time Series Plotting</h1>\n<h3>Unidata Python Workshop</h3>\n\n<div style=\"clear:both\"></div>\n</div>\n\n<hr style=\"height:2px;\">\n\n<div style=\"float:right; width:250 px\"><img src=\"http://matplotlib.org/_images/date_demo.png\" alt=\"METAR\" style=\"height: 300px;\"></div>\n\n\n## Overview:\n\n* **Teaching:** 45 minutes\n* **Exercises:** 30 minutes\n\n### Questions\n1. How can we obtain buoy data from the NDBC?\n1. How are plots created in Python?\n1. What features does Matplotlib have for improving our time series plots?\n1. How can multiple y-axes be used in a single plot?\n\n### Objectives\n1. <a href=\"#loaddata\">Obtaining data</a>\n1. <a href=\"#basictimeseries\">Basic timeseries plotting</a>\n1. <a href=\"#multiy\">Multiple y-axes</a>",
"_____no_output_____"
],
[
"<a name=\"loaddata\"></a>\n## Obtaining Data\nTo learn about time series analysis, we first need to find some data and get it into Python. In this case we're going to use data from the [National Data Buoy Center](http://www.ndbc.noaa.gov). We'll use the [pandas](http://pandas.pydata.org) library for our data subset and manipulation operations after obtaining the data with siphon. \n\nEach buoy has many types of data availabe, you can read all about it in the [NDBC Web Data Guide](https://www.ndbc.noaa.gov/docs/ndbc_web_data_guide.pdf). There is a mechanism in siphon to see which data types are available for a given buoy.",
"_____no_output_____"
]
],
[
[
"from siphon.simplewebservice.ndbc import NDBC\n\ndata_types = NDBC.buoy_data_types('46042')\nprint(data_types)",
"_____no_output_____"
]
],
[
[
"In this case, we'll just stick with the standard meteorological data. The \"realtime\" data from NDBC contains approximately 45 days of data from each buoy. We'll retreive that record for buoy 51002 and then do some cleaning of the data. ",
"_____no_output_____"
]
],
[
[
"df = NDBC.realtime_observations('46042')",
"_____no_output_____"
],
[
"df.tail()",
"_____no_output_____"
]
],
[
[
"Let's get rid of the columns with all missing data. We could use the `drop` method and manually name all of the columns, but that would require us to know which are all `NaN` and that sounds like manual labor - something that programmers hate. Pandas has the `dropna` method that allows us to drop rows or columns where any or all values are `NaN`. In this case, let's drop all columns with all `NaN` values.",
"_____no_output_____"
]
],
[
[
"df = df.dropna(axis='columns', how='all')",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n <ul>\n <li>Use the realtime_observations method to retreive supplemental data for buoy 41002. **Note** assign the data to something other that df or you'll have to rerun the data download cell above. We suggest using the name supl_obs.</li>\n </ul>\n</div>",
"_____no_output_____"
]
],
[
[
"# Your code goes here\n# supl_obs =",
"_____no_output_____"
]
],
[
[
"#### Solution",
"_____no_output_____"
]
],
[
[
"# %load solutions/get_obs.py",
"_____no_output_____"
]
],
[
[
"Finally, we need to trim down the data. The file contains 45 days worth of observations. Let's look at the last week's worth of data.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nidx = df.time >= (pd.Timestamp.utcnow() - pd.Timedelta(days=7))\ndf = df[idx]\ndf.head()",
"_____no_output_____"
]
],
[
[
"We're almost ready, but now the index column is not that meaningful. It starts at a non-zero row, which is fine with our initial file, but let's re-zero the index so we have a nice clean data frame to start with.",
"_____no_output_____"
]
],
[
[
"df.reset_index(drop=True, inplace=True)\ndf.head()",
"_____no_output_____"
]
],
[
[
"<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">",
"_____no_output_____"
],
[
"<a name=\"basictimeseries\"></a>\n## Basic Timeseries Plotting\n\nMatplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. We're going to learn the basics of creating timeseries plots with matplotlib by plotting buoy wind, gust, temperature, and pressure data.",
"_____no_output_____"
]
],
[
[
"# Convention for import of the pyplot interface\nimport matplotlib.pyplot as plt\n\n# Set-up to have matplotlib use its support for notebook inline plots\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"We'll start by plotting the windspeed observations from the buoy.",
"_____no_output_____"
]
],
[
[
"plt.rc('font', size=12)\nfig, ax = plt.subplots(figsize=(10, 6))\n\n# Specify how our lines should look\nax.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed')\n\n# Same as above\nax.set_xlabel('Time')\nax.set_ylabel('Speed (m/s)')\nax.set_title('Buoy Wind Data')\nax.grid(True)\nax.legend(loc='upper left');",
"_____no_output_____"
]
],
[
[
"Our x axis labels look a little crowded - let's try only labeling each day in our time series.",
"_____no_output_____"
]
],
[
[
"# Helpers to format and locate ticks for dates\nfrom matplotlib.dates import DateFormatter, DayLocator\n\n# Set the x-axis to do major ticks on the days and label them like '07/20'\nax.xaxis.set_major_locator(DayLocator())\nax.xaxis.set_major_formatter(DateFormatter('%m/%d'))\n\nfig",
"_____no_output_____"
]
],
[
[
"Now we can add wind gust speeds to the same plot as a dashed yellow line.",
"_____no_output_____"
]
],
[
[
"# Use linestyle keyword to style our plot\nax.plot(df.time, df.wind_gust, color='tab:olive', linestyle='--',\n label='Wind Gust')\n# Redisplay the legend to show our new wind gust line\nax.legend(loc='upper left')\n\nfig",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n <ul>\n <li>Create your own figure and axes (<code>myfig, myax = plt.subplots(figsize=(10, 6))</code>) which plots temperature.</li>\n <li>Change the x-axis major tick labels to display the shortened month and date (i.e. 'Sep DD' where DD is the day number). Look at the\n <a href=\"https://docs.python.org/3.6/library/datetime.html#strftime-and-strptime-behavior\">\n table of formatters</a> for help.\n <li>Make sure you include a legend and labels!</li>\n <li><b>BONUS:</b> try changing the <code>linestyle</code>, e.g., a blue dashed line.</li>\n </ul>\n</div>",
"_____no_output_____"
]
],
[
[
"# Your code goes here\n",
"_____no_output_____"
]
],
[
[
"#### Solution\n<div class=\"alert alert-info\">\n <b>Tip</b>:\n If your figure goes sideways as you try multiple things, try running the notebook up to this point again\n by using the Cell -> Run All Above option in the menu bar.\n</div>",
"_____no_output_____"
]
],
[
[
"# %load solutions/basic_plot.py",
"_____no_output_____"
]
],
[
[
"<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">",
"_____no_output_____"
],
[
"<a name=\"multiy\"></a>\n## Multiple y-axes\nWhat if we wanted to plot another variable in vastly different units on our plot? <br/>\nLet's return to our wind data plot and add pressure.",
"_____no_output_____"
]
],
[
[
"# plot pressure data on same figure\nax.plot(df.time, df.pressure, color='black', label='Pressure')\nax.set_ylabel('Pressure')\n\nax.legend(loc='upper left')\n\nfig",
"_____no_output_____"
]
],
[
[
"That is less than ideal. We can't see detail in the data profiles! We can create a twin of the x-axis and have a secondary y-axis on the right side of the plot. We'll create a totally new figure here.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(10, 6))\naxb = ax.twinx()\n\n# Same as above\nax.set_xlabel('Time')\nax.set_ylabel('Speed (m/s)')\nax.set_title('Buoy Data')\nax.grid(True)\n\n# Plotting on the first y-axis\nax.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed')\nax.plot(df.time, df.wind_gust, color='tab:olive', linestyle='--', label='Wind Gust')\nax.legend(loc='upper left');\n\n# Plotting on the second y-axis\naxb.set_ylabel('Pressure (hPa)')\naxb.plot(df.time, df.pressure, color='black', label='pressure')\n\nax.xaxis.set_major_locator(DayLocator())\nax.xaxis.set_major_formatter(DateFormatter('%b %d'))\n",
"_____no_output_____"
]
],
[
[
"We're closer, but the data are plotting over the legend and not included in the legend. That's because the legend is associated with our primary y-axis. We need to append that data from the second y-axis.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(10, 6))\naxb = ax.twinx()\n\n# Same as above\nax.set_xlabel('Time')\nax.set_ylabel('Speed (m/s)')\nax.set_title('Buoy 41056 Wind Data')\nax.grid(True)\n\n# Plotting on the first y-axis\nax.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed')\nax.plot(df.time, df.wind_gust, color='tab:olive', linestyle='--', label='Wind Gust')\n\n# Plotting on the second y-axis\naxb.set_ylabel('Pressure (hPa)')\naxb.plot(df.time, df.pressure, color='black', label='pressure')\n\nax.xaxis.set_major_locator(DayLocator())\nax.xaxis.set_major_formatter(DateFormatter('%b %d'))\n\n# Handling of getting lines and labels from all axes for a single legend\nlines, labels = ax.get_legend_handles_labels()\nlines2, labels2 = axb.get_legend_handles_labels()\naxb.legend(lines + lines2, labels + labels2, loc='upper left');",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n Create your own plot that has the following elements:\n <ul>\n <li>A blue line representing the wave height measurements.</li>\n <li>A green line representing wind speed on a secondary y-axis</li>\n <li>Proper labels/title.</li>\n <li>**Bonus**: Make the wave height data plot as points only with no line. Look at the documentation for the linestyle and marker arguments.</li>\n </ul>\n</div>",
"_____no_output_____"
]
],
[
[
"# Your code goes here\n",
"_____no_output_____"
]
],
[
[
"#### Solution",
"_____no_output_____"
]
],
[
[
"# %load solutions/adv_plot.py",
"_____no_output_____"
]
],
[
[
"<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb5a9aeb2c428c8fe1d75086e4cc10305dcf58d2 | 6,812 | ipynb | Jupyter Notebook | notebooks/07_01_bidirectional_rnn.ipynb | titu1994/tf-eager-examples | c95a02a96fab794331afa49a1d0ce684fb3340b8 | [
"MIT"
] | 88 | 2018-05-18T05:43:31.000Z | 2021-08-05T01:47:21.000Z | notebooks/07_01_bidirectional_rnn.ipynb | titu1994/tf-eager-examples | c95a02a96fab794331afa49a1d0ce684fb3340b8 | [
"MIT"
] | 3 | 2018-05-20T02:32:08.000Z | 2019-11-11T02:52:26.000Z | notebooks/07_01_bidirectional_rnn.ipynb | titu1994/tf-eager-examples | c95a02a96fab794331afa49a1d0ce684fb3340b8 | [
"MIT"
] | 13 | 2018-05-27T17:43:05.000Z | 2021-08-05T01:55:49.000Z | 33.392157 | 297 | 0.567675 | [
[
[
"import os\nimport numpy as np\n\nimport tensorflow as tf\nfrom tensorflow.python.keras.datasets import mnist\nfrom tensorflow.contrib.eager.python import tfe",
"D:\\Users\\Yue\\Anaconda3\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n"
],
[
"# enable eager mode\ntf.enable_eager_execution()\ntf.set_random_seed(0)\nnp.random.seed(0)",
"_____no_output_____"
],
[
"if not os.path.exists('weights/'):\n os.makedirs('weights/')\n\n# constants\nunits = 64\nbatch_size = 256\nepochs = 2\nnum_classes = 10",
"_____no_output_____"
],
[
"# dataset loading\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train = x_train.astype('float32') / 255.\nx_test = x_test.astype('float32') / 255.\nx_train = x_train.reshape((-1, 28, 28)) # 28 timesteps, 28 inputs / timestep\nx_test = x_test.reshape((-1, 28, 28)) # 28 timesteps, 28 inputs / timeste\n\n# one hot encode the labels. convert back to numpy as we cannot use a combination of numpy\n# and tensors as input to keras\ny_train_ohe = tf.one_hot(y_train, depth=num_classes).numpy()\ny_test_ohe = tf.one_hot(y_test, depth=num_classes).numpy()\n\nprint('x train', x_train.shape)\nprint('y train', y_train_ohe.shape)\nprint('x test', x_test.shape)\nprint('y test', y_test_ohe.shape)",
"x train (60000, 28, 28)\ny train (60000, 10)\nx test (10000, 28, 28)\ny test (10000, 10)\n"
]
],
[
[
"# Bi-Directional LSTM\n\nWriting a Bi-directional LSTM in keras is super simple with the Bidirectional wrapper. However the speed of such a model is slower than expected.\n\nSome fixes for it are to use the GPU implementation for all the cells, and to unroll the entire RNN before hand. In normal Keras and Tensorflow, unrolling the RNN yields significant speed improvements since the symbolic loop is replaced with the unrolled graph representation of the RNN. \n\nIn Eager, I don't believe it is doing much to help with the speed.",
"_____no_output_____"
]
],
[
[
"class BiRNN(tf.keras.Model):\n def __init__(self, units, num_classes, merge_mode='concat', num_layers=1):\n super(BiRNN, self).__init__()\n self.impl = 1 if tfe.num_gpus() == 0 else 2\n self.cells = [tf.keras.layers.LSTMCell(units, implementation=self.impl) for _ in range(num_layers)]\n self.rnn = tf.keras.layers.RNN(self.cells, unroll=True) # slower if not unrolled - probably because it is using K.rnn() internally.\n self.bidirectional = tf.keras.layers.Bidirectional(self.rnn, merge_mode=merge_mode)\n self.classifier = tf.keras.layers.Dense(num_classes)\n\n def call(self, inputs, training=None, mask=None):\n x = self.bidirectional(inputs)\n output = self.classifier(x)\n\n # softmax op does not exist on the gpu, so always use cpu\n with tf.device('/cpu:0'):\n output = tf.nn.softmax(output)\n\n return output",
"_____no_output_____"
],
[
"device = '/cpu:0' if tfe.num_gpus() == 0 else '/gpu:0'\n\nwith tf.device(device):\n # build model and optimizer\n model = BiRNN(units, num_classes, num_layers=2)\n model.compile(optimizer=tf.train.AdamOptimizer(0.01), loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n # TF Keras tries to use entire dataset to determine shape without this step when using .fit()\n # Fix = Use exactly one sample from the provided input dataset to determine input/output shape/s for the model\n dummy_x = tf.zeros((1, 28, 28))\n model._set_inputs(dummy_x)\n\n # train\n model.fit(x_train, y_train_ohe, batch_size=batch_size, epochs=epochs,\n validation_data=(x_test, y_test_ohe), verbose=1)\n\n # evaluate on test set\n scores = model.evaluate(x_test, y_test_ohe, batch_size, verbose=1)\n print(\"Final test loss and accuracy :\", scores)\n\n saver = tfe.Saver(model.variables)\n saver.save('weights/07_01_bi_rnn/weights.ckpt')",
"Train on 60000 samples, validate on 10000 samples\nEpoch 1/2\n60000/60000 [==============================] - 275s 5ms/step - loss: 0.3881 - acc: 0.8740 - val_loss: 0.1089 - val_acc: 0.9657\nEpoch 2/2\n60000/60000 [==============================] - 260s 4ms/step - loss: 0.0927 - acc: 0.9721 - val_loss: 0.0747 - val_acc: 0.9748\n10000/10000 [==============================] - 13s 1ms/step\nFinal test loss and accuracy : [0.07469581732600927, 0.9748]\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
cb5aac5d03124a6a929312ef8343d5774c72caed | 140,037 | ipynb | Jupyter Notebook | notebooks/Tools/full_model_timeseries.ipynb | SalishSeaCast/analysis-keegan | 64eb44809a6581c210d02087c11b92a382945529 | [
"Apache-2.0"
] | null | null | null | notebooks/Tools/full_model_timeseries.ipynb | SalishSeaCast/analysis-keegan | 64eb44809a6581c210d02087c11b92a382945529 | [
"Apache-2.0"
] | null | null | null | notebooks/Tools/full_model_timeseries.ipynb | SalishSeaCast/analysis-keegan | 64eb44809a6581c210d02087c11b92a382945529 | [
"Apache-2.0"
] | null | null | null | 281.764588 | 116,360 | 0.902383 | [
[
[
"This notebook contains a prototype for a workflow that would allow you to compare observations that were sampled in dicrete time to the model output in continuous time. Only the first 14 cells work, and even then they are so unbelievably slow as to be almost entirely useless. ",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path.append('/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools')",
"_____no_output_____"
],
[
"import numpy as np\nimport numpy.polynomial.polynomial as poly\nimport matplotlib.pyplot as plt\nimport os\nimport math\nimport pandas as pd\nfrom erddapy import ERDDAP\nimport netCDF4 as nc\nimport datetime as dt\nfrom salishsea_tools import evaltools as et, viz_tools, places\nimport gsw \nimport matplotlib.gridspec as gridspec\nimport matplotlib as mpl\nimport matplotlib.dates as mdates\nimport cmocean as cmo\nimport scipy.interpolate as sinterp\nimport pickle\nimport cmocean\nimport json\nimport f90nml\nimport xarray as xr\nimport datetime as dt\nimport Keegan_eval_tools as ket\nfrom collections import OrderedDict\n\nfs=16\nmpl.rc('xtick', labelsize=fs)\nmpl.rc('ytick', labelsize=fs)\nmpl.rc('legend', fontsize=fs)\nmpl.rc('axes', titlesize=fs)\nmpl.rc('axes', labelsize=fs)\nmpl.rc('figure', titlesize=fs)\nmpl.rc('font', size=fs)\nmpl.rc('font', family='sans-serif', weight='normal', style='normal')\n\nimport warnings\n#warnings.filterwarnings('ignore')\nfrom IPython.display import Markdown, display\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"from IPython.display import HTML\n\nHTML('''<script>\ncode_show=true; \nfunction code_toggle() {\n if (code_show){\n $('div.input').hide();\n } else {\n $('div.input').show();\n }\n code_show = !code_show\n} \n$( document ).ready(code_toggle);\n</script>\n\n<form action=\"javascript:code_toggle()\"><input type=\"submit\" value=\"Click here to toggle on/off the raw code.\"></form>''')\n",
"_____no_output_____"
]
],
[
[
"year=2010\nmodelversion='nowcast-green.201905'\nPATH= '/results2/SalishSea/nowcast-green.201905/'\ndatadir='/ocean/eolson/MEOPAR/obs/WADE/ptools_data/ecology'",
"_____no_output_____"
],
[
"##### Loading in pickle file data\nsaveloc='/ocean/kflanaga/MEOPAR/savedData/WADE_nutribot_pickles'\n\nwith open(os.path.join(saveloc,f'data_WADE_{modelversion}_{year}.pkl'),'rb') as hh:\n data=pickle.load(hh)",
"_____no_output_____"
],
[
"#creating new dictionaries that make it easy to call on specific years.\ndatstat=dict()\nfor ind, istation in enumerate(data.Station.unique()):\n datstat[istation]=data.loc[data.Station == istation]",
"_____no_output_____"
],
[
"%%time\nstart= dt.datetime(2010,1,1)\nend=dt.datetime(2010,12,31) # the code called below (evaltools.index_model_files) includes the end date \n # in the values returned\nbasedir='/results2/SalishSea/nowcast-green.201905/'\nnam_fmt='nowcast'\nflen=1 # files contain 1 day of data each\nftype= 'ptrc_T' # load bio files\ntres=24 # 1: hourly resolution; 24: daily resolution <- try changing to 1 and loading hourly data\nflist=et.index_model_files(start,end,basedir,nam_fmt,flen,ftype,tres)\n# flist contains paths: file pathes; t_0 timestemp of start of each file; t_n: timestamp of start of next file\n",
"CPU times: user 18.7 ms, sys: 16.4 ms, total: 35.1 ms\nWall time: 797 ms\n"
],
[
"# get model i,j of location S3 from places\nij,ii=places.PLACES['S3']['NEMO grid ji']\nik=2 # choose surface level",
"_____no_output_____"
],
[
"ii=data[data.Station == 'BUD005'].i.unique()[0]\nij=data[data.Station == 'BUD005'].j.unique()[0]\nik=2",
"_____no_output_____"
],
[
"bio=xr.open_mfdataset(flist['paths'])",
"_____no_output_____"
],
[
"%%time\ntt=bio.time_counter\nNO23=bio.nitrate.isel(deptht=ik,y=ij,x=ii) #.cell will give closest to two meters \n#this is where we have the depth problem. ",
"CPU times: user 2.43 ms, sys: 327 µs, total: 2.76 ms\nWall time: 2.76 ms\n"
],
[
"def TsByStation_ind2 (df,datstat,regions,obsvar,modvar,year,ylim,figsize=(14,40),loc='lower left',depth=5): \n stations=[]\n for r in regions:\n sta0=df[df['Basin']==r].Station.unique()\n stations.append(sta0)\n stations = [val for sublist in stations for val in sublist]\n fig,ax=plt.subplots(math.ceil(len(stations)/2),2,figsize=figsize)\n new_stat = [stations[i:i+2] for i in range(0, len(stations), 2)]\n for si,axi in zip(new_stat,ax):\n for sj,axj in zip(si,axi):\n #The creation of the observed data points\n ps=[]\n obs0=et._deframe(df.loc[(df['dtUTC'] >= dt.datetime(year,1,1))&(df['dtUTC']<= dt.datetime(year,12,31))&(df['Station']==sj)&(df['Z']<=depth),[obsvar]])\n time0=et._deframe(df.loc[(df['dtUTC'] >= dt.datetime(year,1,1))&(df['dtUTC']<= dt.datetime(year,12,31))&(df['Station']==sj)&(df['Z']<=depth),['dtUTC']])\n p0,=axj.plot(time0,obs0,'.',color='blue',label=f'Observed {obsvar}',marker='o',fillstyle='none')\n ps.append(p0)\n # The creation of the model data line \n ii=data[data.Station == sj].i.unique()[0]\n ij=data[data.Station == sj].j.unique()[0]\n ik=0\n tt=bio.time_counter\n NO23=bio[modvar].isel(deptht=ik,y=ij,x=ii)\n p0,=axj.plot(tt,NO23,'-',color='darkorange',label='Nitrate')\n ps.append(p0)\n #labeling and formatting\n axj.set_ylabel('Concentration ($\\mu$M)')\n axj.set_xlim(tt[0],tt[-1])\n axj.legend(handles=ps,prop={'size': 10},loc=loc)\n axj.set_xlabel(f'Date',fontsize=13)\n axj.set_ylabel(f'{obsvar} ($\\mu$M)',fontsize=13)\n axj.set_title(f'{df[df.Station==sj].Basin.unique()[0]} ({sj})', fontsize=13)\n axj.set_ylim(ylim)\n yearsFmt = mdates.DateFormatter('%d %b')\n axj.xaxis.set_major_formatter(yearsFmt)\n for tick in axj.xaxis.get_major_ticks():\n tick.label.set_fontsize(13)\n for tick in axj.yaxis.get_major_ticks():\n tick.label.set_fontsize(13)\n plt.tight_layout()\n plt.setp(axj.get_xticklabels(), rotation=30, horizontalalignment='right')",
"_____no_output_____"
],
[
"obsvar='NO23'\nmodvar='nitrate'\nregions=['Hood Canal Basin']\nlims=(0,40)\n\nTsByStation_ind2(data,datstat,regions,obsvar,modvar,year,lims,figsize=(14,14),loc='lower left')",
"_____no_output_____"
],
[
"bio.close()",
"_____no_output_____"
]
],
[
[
"Hmmm The fact that there are multiple different points at different depths make this technique mostly useless. Even if I fix it so that there are multiple lines or something, it will take so long it will be almost useless. Perhaps If I only look at observations at a certain depth it can be at least a little helpful. ",
"_____no_output_____"
]
],
[
[
"# Now we are actually loading everything from a website/ online database instead of from our own results storage.\nserver = \"https://salishsea.eos.ubc.ca/erddap\"\n\nprotocol = \"griddap\"\n\ndataset_id = \"ubcSSg3DBiologyFields1hV19-05\"\n\nresponse = \"nc\"\n\nvariables = [\n \"nitrate\",\n \"time\",\n]\n\nfourkmlat = 4/110.574\nfourkmlon = 4/(111.320*np.cos(50*np.pi/180.))\nlon, lat = places.PLACES['S3']['lon lat']\n\nconstraints = {\n \"time>=\": \"2015-02-01T00:00:00Z\",\n \"time<=\": \"2015-04-01T00:00:00Z\",\n}\n\nprint(constraints)",
"{'time>=': '2015-02-01T00:00:00Z', 'time<=': '2015-04-01T00:00:00Z'}\n"
],
[
"obs = ERDDAP(server=server, protocol=protocol,)\n\nobs.dataset_id = dataset_id\nobs.variables = variables\nobs.constraints = constraints",
"_____no_output_____"
],
[
"obs\nprint(obs.get_download_url())",
"https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DBiologyFields1hV19-05.html?nitrate,time&time>=1422748800.0&time<=1427846400.0\n"
],
[
"obs_pd = obs.to_pandas(index_col=\"time (UTC)\", parse_dates=True,).dropna()\nobs_pd",
"_____no_output_____"
],
[
"server = \"https://salishsea.eos.ubc.ca/erddap\"\n\nprotocol = \"tabledap\"\n\ndataset_id = \"ubcONCTWDP1mV18-01\"\n\nresponse = \"nc\"\n\nvariables = [\n \"latitude\",\n \"longitude\",\n \"chlorophyll\",\n \"time\",\n]\n\nfourkmlat = 4/110.574\nfourkmlon = 4/(111.320*np.cos(50*np.pi/180.))\nlon, lat = places.PLACES['S3']['lon lat']\n\nconstraints = {\n \"time>=\": \"2015-02-01T00:00:00Z\",\n \"time<=\": \"2015-04-01T00:00:00Z\",\n \"latitude>=\": lat - fourkmlat,\n \"latitude<=\": lat + fourkmlat,\n \"longitude>=\": lon - fourkmlon,\n \"longitude<=\": lon + fourkmlon,\n}\n\nprint(constraints)",
"_____no_output_____"
],
[
"obs = ERDDAP(server=server, protocol=protocol,)\n\nobs.dataset_id = dataset_id\nobs.variables = variables\nobs.constraints = constraints",
"_____no_output_____"
],
[
"obs_pd = obs.to_pandas(index_col=\"time (UTC)\", parse_dates=True,).dropna()",
"_____no_output_____"
],
[
"obs_pd",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"raw",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"raw"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5ab89e78c0bd8bad28beee1cace65c3388c8a4 | 4,216 | ipynb | Jupyter Notebook | examples/hydrogen_orbitals.ipynb | pevers/K3D-jupyter | 36e194c7357b875bfc185e219add2342ea297cbf | [
"MIT"
] | 1 | 2021-07-07T13:34:51.000Z | 2021-07-07T13:34:51.000Z | examples/hydrogen_orbitals.ipynb | pevers/K3D-jupyter | 36e194c7357b875bfc185e219add2342ea297cbf | [
"MIT"
] | null | null | null | examples/hydrogen_orbitals.ipynb | pevers/K3D-jupyter | 36e194c7357b875bfc185e219add2342ea297cbf | [
"MIT"
] | null | null | null | 20.768473 | 109 | 0.480313 | [
[
[
"import k3d\nimport numpy as np\nimport scipy.special\nimport scipy.misc\n\nr = lambda x,y,z: np.sqrt(x**2+y**2+z**2)\ntheta = lambda x,y,z: np.arccos(z/r(x,y,z))\nphi = lambda x,y,z: np.arctan(y/x)\n\na0 = 1.\nR = lambda r,n,l: (2*r/n/a0)**l * np.exp(-r/n/a0) * scipy.special.genlaguerre(n-l-1,2*l+1)(2*r/n/a0)\nWF = lambda r,theta,phi,n,l,m: R(r,n,l) * scipy.special.sph_harm(m,l,phi,theta)\nabsWF = lambda r,theta,phi,n,l,m: abs(WF(r,theta,phi,n,l,m))**2\nN=100j\na = 200.0\nx,y,z = np.ogrid[-a:a:N,-a:a:N,-a:a:N]\nx = x.astype(np.float32)\ny = y.astype(np.float32)\nz = z.astype(np.float32)",
"_____no_output_____"
],
[
"orbital = absWF(r(x,y,z),theta(x,y,z),phi(x,y,z),1,0,0) # 1s",
"_____no_output_____"
],
[
"plot = k3d.plot()\nplot.display()",
"_____no_output_____"
],
[
"plot.grid_auto_fit = False",
"_____no_output_____"
],
[
"E = 10\nvolume_animation = {}\nlabel_animation = {}\ni = 0\n\nfor l in range(E):\n print(l, '/', E-1, end='\\r')\n for m in range(-l,l+1):\n psi2 = absWF(r(x, y, z), theta(x, y, z), phi(x, y, z), E, l, m)\n \n volume_animation[str(i)] = (psi2/np.max(psi2))\n label_animation[str(i)] = 'n=%d \\quad l=%d \\quad m=%d' % (E,l,m)\n \n i += 0.1",
"9 / 9\r"
],
[
"plot += k3d.text2d(label_animation, (0.,0.))",
"_____no_output_____"
],
[
"plot += k3d.volume(volume_animation, color_map=k3d.colormaps.basic_color_maps.CoolWarm, \n color_range=(0.0,0.1))",
"_____no_output_____"
],
[
"np.array(volume_animation).tolist()['0'].dtype",
"_____no_output_____"
],
[
"plot.colorbar_object_id = 0",
"_____no_output_____"
],
[
"plot.start_auto_play()",
"_____no_output_____"
],
[
"plot.stop_auto_play()",
"_____no_output_____"
],
[
"np.float32",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5abf84d47f2d8cd5b558ac40aa8bccf4604f22 | 37,157 | ipynb | Jupyter Notebook | Section_1_Implement_your_own_neuron_from_scratch.ipynb | adamuas/intuitive_intro_to_ann_ml | e9747d5403e9e3ed27800fa5df8748dc4545d2ce | [
"MIT"
] | null | null | null | Section_1_Implement_your_own_neuron_from_scratch.ipynb | adamuas/intuitive_intro_to_ann_ml | e9747d5403e9e3ed27800fa5df8748dc4545d2ce | [
"MIT"
] | null | null | null | Section_1_Implement_your_own_neuron_from_scratch.ipynb | adamuas/intuitive_intro_to_ann_ml | e9747d5403e9e3ed27800fa5df8748dc4545d2ce | [
"MIT"
] | null | null | null | 50.9 | 12,976 | 0.614339 | [
[
[
"<a href=\"https://colab.research.google.com/github/adamuas/intuitive_intro_to_ann_ml/blob/master/Section_1_Implement_your_own_neuron_from_scratch.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# An Intuitive Introduction to Artificial Neural Networks and Machine Learning\n",
"_____no_output_____"
],
[
"This notebook walks you through the implementation of a neuron, its activation function (i.e. a function to model the excitation stage of the neuron) and its output function (i.e. a function to model the firing stage of the neuron).\n\n",
"_____no_output_____"
]
],
[
[
"# First off lets start by importing some useful python packages\nimport pandas as pd\nimport numpy as np\nfrom sklearn.datasets import load_digits, load_diabetes\nfrom matplotlib import pylab as plt\n\n\n# install tensorflow 2\nimport tensorflow as tf\n\n%matplotlib inline",
"_____no_output_____"
],
[
"print('tensorflow version:{}'.format(tf.__version__))",
"tensorflow version:1.15.0\n"
]
],
[
[
"## Section 1 - Implementing your own neuron",
"_____no_output_____"
],
[
"### Section 1.1 - The Neuron",
"_____no_output_____"
]
],
[
[
"class Neuron:\n \"\"\"\n A model of our artificla neuron.\n \"\"\"\n \n def __init__(self, n_inputs, bias=1.0, weight_fn=None, activation_fn=None):\n \"\"\"\n Constructor for our neuron\n \n params:\n n_inputs - number of input connections/weights\n bias - bias of the neuron\n activation_fn - activation function that models excitation\n weight_fn - input combination function\n \n \"\"\"\n # randomly initialize the weights (1 weight for each input)\n # Weights are supposed to represent the connection strength of the dendrites of our neuron.\n self.weights = np.random.randn(n_inputs)\n # bias of the neuron \n self.bias = bias\n # number of inputs the neuron recevies\n self.n_inputs = n_inputs\n # activation function of the neuron\n self.activation_fn = activation_fn \n # weight function of the neuron\n self.weight_fn = weight_fn \n \n def stimulate(self, inputs, verbose=False):\n \"\"\"\n Propagates the inputs through the weights and returns the output of the neuron\n \n params:\n inputs - inputs signals for the neuron\n \n returns:\n output value - output signal of the neuron\n \"\"\"\n \n \n # initialize our output value\n output_value= 0\n action_potential = 0\n \n # pass through activation function\n if self.weight_fn:\n action_potential = self.weight_fn(weights=self.weights, \n inputs=inputs)\n # add neuron's bias\n action_potential = action_potential + self.bias\n \n if verbose:\n print('Action Potential: {}'.format(action_potential))\n \n if self.activation_fn:\n output_value = self.activation_fn(action_potential=action_potential)\n \n if verbose:\n print('Output Value: {}'.format(output_value))\n \n return output_value\n \n def __repr__(self):\n \"\"\"\n Returns a string representation of our Artificial Neuron\n \n returns:\n String\n \"\"\"\n return \"<neuron>\\nweights: {}\\nbias: {}\\nactivation_fn: {}\\n weight_fn:{}\\n</neuron>\".format(self.weights,\n self.bias,\n self.activation_fn,\n self.weight_fn)\n ",
"_____no_output_____"
]
],
[
[
"### Section 1.2 - Weight Function (Model's Input Combination Function)\n\n",
"_____no_output_____"
]
],
[
[
"def weighted_sum(weights,inputs):\n \"\"\" \n Weighted sum activation function\n \n This is supposed to model the action excitation stage of the artificial neuron.\n It models how excited the neuron should be based on the inputs (stimulus)\n \n parameters:\n W - Weights (Weights of the neuron)\n I - inputs (features used to make the decision)\n \n returns:\n action_potentatial - the degree of activation for that particular neuron\n \n \"\"\"\n \n action_potential = 0\n \n for input_i, weight_i in zip(inputs, weights):\n action_potential += input_i * weight_i\n \n return action_potential\n",
"_____no_output_____"
]
],
[
[
"**Things to think about:**\n\n\n* Why use weighted sum?\n\n",
"_____no_output_____"
],
[
"### Section 1.2 - Activation Function (Model's firing stage)",
"_____no_output_____"
]
],
[
[
"def sigmoid(action_potential):\n \"\"\" \n Sigmoid output function\n \n This is supposed to model the firing stage of the neuron.\n It models how much the neuron should output based on the action potential \n generated from the excitation stage.\n \n return:\n returns the output value of the neuron\n \n \"\"\"\n return 1/(1 + np.exp(-1 * (action_potential)))",
"_____no_output_____"
]
],
[
[
"**Things to think about**:\n\n\n* Why a sigmoid activation function?\n\n",
"_____no_output_____"
],
[
"#### Behaviour of Sigmoids",
"_____no_output_____"
]
],
[
[
"X = range(15)\ny = [sigmoid(x) for x in X]\n\ndf = pd.DataFrame.from_dict({'X': X, 'y':y})\ndf.head()",
"_____no_output_____"
],
[
"df.plot(x='X',y='y')\nplt.title('Behaviour Of Sigmoids')",
"_____no_output_____"
]
],
[
[
"The Sigmoid output function allows us to squash the output values of neuron's while preserving their magnitudes. So with greater output values we get closer and closer to 1.0,and with smaller and smaller output values we get closer to 0.0",
"_____no_output_____"
],
[
"## (5) Stimulate Our Neuron \n\nWe will be stimulating our neuron with the Iris dataset. It is a simple dataset that has measures of the iris plant such as the petal length, and width along with the corresponding species of the iris plant. It is a gentle introduction to simple datasets.",
"_____no_output_____"
],
[
"### 5.1 - Load the Iris Dataset from Sklearn",
"_____no_output_____"
]
],
[
[
"# lets import the iris dataset.\nfrom sklearn.datasets import load_iris\n\n# load the iris dataset\ndata = load_iris()\n\nprint(data['DESCR'])",
".. _iris_dataset:\n\nIris plants dataset\n--------------------\n\n**Data Set Characteristics:**\n\n :Number of Instances: 150 (50 in each of three classes)\n :Number of Attributes: 4 numeric, predictive attributes and the class\n :Attribute Information:\n - sepal length in cm\n - sepal width in cm\n - petal length in cm\n - petal width in cm\n - class:\n - Iris-Setosa\n - Iris-Versicolour\n - Iris-Virginica\n \n :Summary Statistics:\n\n ============== ==== ==== ======= ===== ====================\n Min Max Mean SD Class Correlation\n ============== ==== ==== ======= ===== ====================\n sepal length: 4.3 7.9 5.84 0.83 0.7826\n sepal width: 2.0 4.4 3.05 0.43 -0.4194\n petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)\n petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)\n ============== ==== ==== ======= ===== ====================\n\n :Missing Attribute Values: None\n :Class Distribution: 33.3% for each of 3 classes.\n :Creator: R.A. Fisher\n :Donor: Michael Marshall (MARSHALL%[email protected])\n :Date: July, 1988\n\nThe famous Iris database, first used by Sir R.A. Fisher. The dataset is taken\nfrom Fisher's paper. Note that it's the same as in R, but not as in the UCI\nMachine Learning Repository, which has two wrong data points.\n\nThis is perhaps the best known database to be found in the\npattern recognition literature. Fisher's paper is a classic in the field and\nis referenced frequently to this day. (See Duda & Hart, for example.) The\ndata set contains 3 classes of 50 instances each, where each class refers to a\ntype of iris plant. One class is linearly separable from the other 2; the\nlatter are NOT linearly separable from each other.\n\n.. topic:: References\n\n - Fisher, R.A. \"The use of multiple measurements in taxonomic problems\"\n Annual Eugenics, 7, Part II, 179-188 (1936); also in \"Contributions to\n Mathematical Statistics\" (John Wiley, NY, 1950).\n - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.\n (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.\n - Dasarathy, B.V. (1980) \"Nosing Around the Neighborhood: A New System\n Structure and Classification Rule for Recognition in Partially Exposed\n Environments\". IEEE Transactions on Pattern Analysis and Machine\n Intelligence, Vol. PAMI-2, No. 1, 67-71.\n - Gates, G.W. (1972) \"The Reduced Nearest Neighbor Rule\". IEEE Transactions\n on Information Theory, May 1972, 431-433.\n - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al\"s AUTOCLASS II\n conceptual clustering system finds 3 classes in the data.\n - Many, many more ...\n"
]
],
[
[
"### 5.2 - Create our Neuron\n\nHere we will specify the number of input features as the number of attributes we have available in the Iris dataset because we will like to use those attributes to make predictions about the Iris plant.",
"_____no_output_____"
]
],
[
[
"# specify the number of inputs features we want the neruon to consider \nIRIS_FEATURES = 4\nneuron = Neuron(n_inputs=IRIS_FEATURES, weight_fn=weighted_sum, activation_fn=sigmoid)\nprint(neuron)",
"<neuron>\nweights: [ 0.31923544 -1.16380619 0.09427354 0.41858532]\nbias: 1.0\nactivation_fn: <function sigmoid at 0x7fdd88054bf8>\noutput_fn:<function weighted_sum at 0x7fddbf408bf8>\n</neuron>\n"
]
],
[
[
"### 5.3 - Stimulate the neuron with the iris Dataset\nHere we will use a few samples of the iris dataset to stimulate the neuron.",
"_____no_output_____"
]
],
[
[
"inputs = data['data']\ntargets = data['target']\n\nnum_samples = 5\n\nfor i in range(num_samples):\n input_i = inputs[i]\n target_i = targets[i]\n print('input: {}'.format(input_i))\n output_i = neuron.stimulate(input_i)\n print('output: {}'.format(output_i))",
"input: [5.1 3.5 1.4 0.2]\noutput: 0.22626529255090236\ninput: [4.9 3. 1.4 0.2]\noutput: 0.3292752353503124\ninput: [4.7 3.2 1.3 0.2]\noutput: 0.2655145465791381\ninput: [4.6 3.1 1.5 0.2]\noutput: 0.2861434485425675\ninput: [5. 3.6 1.4 0.2]\noutput: 0.20135853333206966\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
cb5abff592042ecbcfe824fda048e090bf830f48 | 43,093 | ipynb | Jupyter Notebook | .ipynb_checkpoints/S1_Introduction_to_version_control-checkpoint.ipynb | hphilamore/ILAS_python_ | f17b4f0aeaf0f8daeb943bcdb5544716c0a5b3f6 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/S1_Introduction_to_version_control-checkpoint.ipynb | hphilamore/ILAS_python_ | f17b4f0aeaf0f8daeb943bcdb5544716c0a5b3f6 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/S1_Introduction_to_version_control-checkpoint.ipynb | hphilamore/ILAS_python_ | f17b4f0aeaf0f8daeb943bcdb5544716c0a5b3f6 | [
"MIT"
] | null | null | null | 31.569963 | 334 | 0.584248 | [
[
[
"from IPython.core.display import HTML\ndef css_styling():\n styles = open(\"./styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()",
"_____no_output_____"
]
],
[
[
"# Introduction to Version Control\n\n",
"_____no_output_____"
],
[
"This is an introductory guide to the basic functions of Git version control software and the GitHub code hosting site that we will use during the Introduction to Programming Python course. \n\nThe examples in this guide will show you how to:\n- ",
"_____no_output_____"
],
[
"## Git\n__What is Git?__ \n\nGit is *version control* software.",
"_____no_output_____"
],
[
"__What is version control software?__ \n\nSoftware that tracks and manages changes to project without overwriting any part of the project. ",
"_____no_output_____"
],
[
"Typically, when you save a file, for example a word document, you either:\n - overwrite the previous version (save)\n - save the file under a new name (save as)",
"_____no_output_____"
],
[
"This means we either:\n- Lose the previous version\n- End up with multiple files\n",
"_____no_output_____"
],
[
"In programming we often want to:\n - make a small change to our program \n - test our change works before moving on. \n - easily revert to a previous version if we don't like the changes\n\nIt makes sense to incrementally save our work as we go along. \n\nThat way, if we break something we can just go back to the previous version. ",
"_____no_output_____"
],
[
"But this can lead to many files:\n\n<img src=\"../../../ILAS_seminars/intro to python/img/many_files_same_name.png\" alt=\"Drawing\" style=\"width: 300px;\"/>\n\nHow can we tell what each one does?",
"_____no_output_____"
],
[
"We could try giving them meaningful names:\n\n<img src=\"../../../ILAS_seminars/intro to python/img/many_files.gif\" alt=\"Drawing\" style=\"width: 300px;\"/>\n\nBut the name can only tell us a little bit of information...\n",
"_____no_output_____"
],
[
"...before they start getting really long!\n\n<img src=\"../../../ILAS_seminars/intro to python/img/many_files_different_names.png\" alt=\"Drawing\" style=\"width: 300px;\"/>\n\nThings get very confusing!\n\nAnd many files take up lots of space on your computer. ",
"_____no_output_____"
],
[
"### How Git works\nGit creates a folder in the same directory as your file. \n\nThe directory containing both the file being tracked and the Git folder is now referred to as a repository or \"repo\". \n\n(The folder is hidden.)\n\nThe folder being tracked by git is referred to as a repository or \"repo\". \n\nYou can keep any type of file in a repository (code files, text files, image files....). \n\n",
"_____no_output_____"
],
[
"\n\nIt logs changes you make to the file.\n\nIt track of multiple files within this directory. \n\nIt stores a *commit message* with each change you *commit*, saying what you changed:\n\n<img src=\"../../../ILAS_seminars/intro to python/img/git_commit_.png\" alt=\"Drawing\" style=\"width: 300px;\"/>",
"_____no_output_____"
],
[
"So if you make a mistake, you can just reset to a previous version.\n\n<img src=\"../../../ILAS_seminars/intro to python/img/git_reset.png\" alt=\"Drawing\" style=\"width: 300px;\"/>",
"_____no_output_____"
],
[
"When you commit chanegs, Git does not save two versions of the same file. \n\nGit only saves the __difference__ between two files.\n\nThis minimises the amount of space that tracking your changes takes up on your computer,\n\n__Example:__ Between files r3 and r4, the information saved is\n > -juice <br>\n > +soup\n\n<img src=\"../../../ILAS_seminars/intro to python/img/git_diff.png\" alt=\"Drawing\" style=\"width: 500px;\"/>",
"_____no_output_____"
],
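[
"If you are curious, once you have a repository with at least two commits you can ask Git to show these saved differences directly. This is a small aside: the commands below are standard Git, but the exact output will depend on your own files.\n>`git log --oneline`\n><br>`git diff HEAD~1 HEAD`\n\nThe first command lists your commits, one per line. The second shows only the lines that changed between the previous commit and the most recent one.",
"_____no_output_____"
],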
[
"### Advantages and Disadventages of Git\n\nA __great thing__ about git is that it was made by programmers for programmers. \n\nProfessional developers and most other professionals who write code, use git (or other version control software) to manage their files, workflow and collaborations. \n\nIt has an enourmous range of functionality.\n",
"_____no_output_____"
],
[
"A __problem__ with so much freedom is that it can be easy to get things wrong.\n\nGit can be difficult to use.\n\n",
"_____no_output_____"
],
[
"To keep things nice and easy we will learn only the basics of using Git. \n\nEven this basic understanding will give you essential skills that are used every day by professional programmers. ",
"_____no_output_____"
],
[
"A __problem__ with Git is that it was made by programmers for programmers. \n\nWe have to use the command line (or Terminal) to access it. \n\nThere is no user interface.\n\nIt can be difficult to visualise what is going on. \n\n<img src=\"../../../ILAS_seminars/intro to python/img/git_command_line.png\" alt=\"Drawing\" style=\"width: 500px;\"/>",
"_____no_output_____"
],
[
"## GitHub\nTo provide a visual interface we can use an online *host site* to store and view code...\n\n",
"_____no_output_____"
],
[
"A repo can be a local folder on your computer. \n\nA repo can also be a storage space on GitHub or another online host site. \n\n<img src=\"../../../ILAS_seminars/intro to python/img/github-logo.jpg\" alt=\"Drawing\" style=\"width: 200px;\"/>\n",
"_____no_output_____"
],
[
"GitHub.com is a \"code hosting site\".\n\nIt provides a visual interface to view code, the changes (*commits*) and to share and collaborate with others. \n\nThere are many code hosting sites, however Github has a large community of users. \n\nSo for programmers, it works like a social media site like Facebook or instagram.\n\n<img src=\"../../../ILAS_seminars/intro to python/img/github-logo.jpg\" alt=\"Drawing\" style=\"width: 200px;\"/>\n\n",
"_____no_output_____"
],
[
"Let's start by downloading your interactive textbook from github.com\n\nOpen a web browser and go to:\n\nhttps://github.com/hphilamore/ILAS_python\n\nThis is a __repository__.\n\nIt is an online directory where this project, the textbook, is stored. ",
"_____no_output_____"
],
[
"We can look at previous versions of the code by selecting *commits*...\n\nWe can easily view the difference (\"diff\") between the previous and current version.",
"_____no_output_____"
],
[
" You are going to download a personal copy of the textbook to you user area. ",
"_____no_output_____"
],
[
"Please log on to the computer. \n\n\n\n",
"_____no_output_____"
],
[
"### Introduction to the Command Line. \n\nWe are going to download the textbook using the command line.\n\nTo open the terminal:\n - press \"win key\" + \"R\"\n - type: __cmd__\n - press enter",
"_____no_output_____"
],
[
"A terminal will launch.\n\nThe *command prompt* will say something like:\n\nC:¥Users¥Username:\n\nThe C tells us that we are on the C drive of the computer.\n\nLets switch to the M drive where the user (you!) can save files.",
"_____no_output_____"
],
[
"In the terminal type: \n\n>`M:`\n\n...and press enter.\n",
"_____no_output_____"
],
[
"You should see the command prompt change. \n\n\n<img src=\"../../../ILAS_seminars/intro to python/img/KUterminalMdrive.png\" alt=\"Drawing\" style=\"width: 700px;\"/>",
"_____no_output_____"
],
[
"To see what is on the M drive type:\n\n>`dir`\n\n..and press enter.\n\nYou will see all the folders in your personal user area.\n\nDouble click on the computer icon on the desktop. \n\nDouble click on Home Directory (M:).\n\nYou should see the same folders as those listed in the terminal.\n",
"_____no_output_____"
],
[
"To navigate to documents type:\n\n>`cd Documents`\n\ncd stands for \"change directory\".\n\n\n",
"_____no_output_____"
],
[
"We can move down the filesystem of the computer by typing:\n\n>`cd`\n\nfollowed by the name of the folder we want to move to.\n\nThe folder must be:\n - on the same branch\n - one step from our current location\n\n<img src=\"../../../ILAS_seminars/intro to python/img/directory_tree.gif\" alt=\"Drawing\" style=\"width: 200px;\"/>",
"_____no_output_____"
],
[
"Type:\n \n>`dir` \n\nagain to view the contents of your Documents folder.",
"_____no_output_____"
],
[
"To move back up by one step, type:\n \n>`cd ..`\n\nTry this now.\n\n<img src=\"../../../ILAS_seminars/intro to python/img/directory_tree.gif\" alt=\"Drawing\" style=\"width: 200px;\"/>",
"_____no_output_____"
],
[
"We can move by more than one step by seperating the names of the folders using the symbol: \n\n¥\n\n(note, this is \\ or / on US and European computers, depending on the operating system)",
"_____no_output_____"
],
[
"For example, now try navigating to any folder in your Documents folder by typing:\n>`cd Documents¥folder_name`\n\nwhere `folder_name` is the name of the folder to move to.\n",
"_____no_output_____"
],
[
"And now let's go back to the main Documents folder by typing:\n> cd ..",
"_____no_output_____"
],
[
"Type:\n \n>`dir` \n\nagain to view the contents of your Documents folder.\n",
"_____no_output_____"
],
[
"## 'Cloning' the Textbook Using Git\n\nGo to the Github site we opened earlier. \n\nWe are going to download a copy of the textbook from an online *repository*.\n\nThis is referred to as *cloning*.\n\nThis will allow you to work on the textbook and save it locally on a computer.\n\nClick the button \"Clone or download\" and copy the link by presssing Ctrl , C\n\n<img src=\"../../../ILAS_seminars/intro to python/img/clone-url.png\" alt=\"Drawing\" style=\"width: 500px;\"/>\n\n\n",
"_____no_output_____"
],
[
"In the terminal type `git clone`. After the word `clone` leave a space and then paste the URL that you just copied.:\n\n> `git clone` [PASTE COPIED URL HERE]\n\n\n\n`Clone` copies all the files from the repository at the URL you have entered. ",
"_____no_output_____"
],
[
"In the terminal type:\n\n> `dir`\n\nA folder called \"ILAS_python\" should have appeared. \n\nGo into the folder and view the content by typing:\n\n>`cd ILAS_pyhon`\n><br>`dir` \n\n",
"_____no_output_____"
],
[
"Hint: If you start typing a folder name and press \"tab\", the folder name autocompletes! Try it for yourself e.g. in the Documents directory type:\n\n>`cd ILAS`\n\nthen press \"tab\".",
"_____no_output_____"
],
[
"The textbook files should now have appeared in your Documents folder.",
"_____no_output_____"
],
[
"## Creating an Online Github Account\n\nThe __online Github repository__ that you cloned the textbook from belongs to me. \n\nYou are going to create your own online Github user account.\n\n\n",
"_____no_output_____"
],
[
"You will use Github to update the online version of your textbook with the changes you make to the version stored locally on the university M drive.\n\nThis means you can easily access it from outside the Kyoto University system, for example, to complete your homework. \n\nI will use your online repositories to view your work and check your progress during the course.",
"_____no_output_____"
],
[
"Open https://github.com\n\nClick \"Sign in\" at the top right hand corner. \n\n<img src=\"../../../ILAS_seminars/intro to python/img/github_signup.png\" alt=\"Drawing\" style=\"width: 500px;\"/>\n",
"_____no_output_____"
],
[
"Follow the steps to create an account, the same way as you would for a social media site for example.\n\nChoose a user name, email address, password.\n\n<img src=\"../../../ILAS_seminars/intro to python/img/github-signup.png\" alt=\"Drawing\" style=\"width: 300px;\"/>\n\nUse the confirmation email to complete your registration.",
"_____no_output_____"
],
[
"## Creating an Online GitHub Repository\n\nNow we are going to set up your first online repository. \n\nClick the + sign in the top right corner.\n\nChoose \"New repository\". \n\n<img src=\"../../../ILAS_seminars/intro to python/img/github_newrepo.png\" alt=\"Drawing\" style=\"width: 500px;\"/>",
"_____no_output_____"
],
[
"Choose a repository name (e.g. Python Textbook, python_textbook, Intro_to_python)\n\n<img src=\"../../../ILAS_seminars/intro to python/img/github_namerepo.jpg\" alt=\"Drawing\" style=\"width: 300px;\"/>",
"_____no_output_____"
],
[
"Leave the other settings as they are for now.\n\nWe will learn about these later in the course. \n\nClick the button \"Create repository\".\n\n<img src=\"../../../ILAS_seminars/intro to python/img/github_create_repo.jpg\" alt=\"Drawing\" style=\"width: 300px;\"/>",
"_____no_output_____"
],
[
"## Adding Files to an Online Github Repository\nWe are now going to link your local repository (stored on the computer on the M drive) to your online repository (stored at github.com). \n\nIn the terminal, make sure you are __inside__ the folder named ILAS_python.\n\nIf you are not, then navigate to the folder using \n\n>`cd`\n\n",
"_____no_output_____"
],
[
"Enter the username that you registered when setting up your account on GitHub:\n\n>`git config --global user.name \"username\"`\n\nEnter the email adress that you registered when setting up your account on GitHub:\n\n>`git config --global user.email \"[email protected]\"`",
"_____no_output_____"
],
[
"Copy the URL of your repo from the \"Quick setup\" section. \n\n<img src=\"../../../ILAS_seminars/intro to python/img/github_copyurl.png\" alt=\"Drawing\" style=\"width: 300px;\"/>\n\n__NOTE__ \n<br>Earlier we copied the URL of __my repository__ (https://github.com/hphilamore/ILAS_python.git).\n<br>We used it to tell the computer where to copy files __from__.\n\n<br>Now we are copying the URL of __your repository__(https://github.com/yourGithub/yourRepo.git).\n<br>We will now use a similar procedure to tell the computer where it should copy files __to__.",
"_____no_output_____"
],
[
"First we will disconnect your local repo from __my__ online repo.\n<br>The command `git remote rm` removes (`rm`) a remote (`remote`) URL from your local repository. \n<br>Type:\n>`git remote rm origin`\n\n(*origin* is a name that was given by default to the URL you cloned the repository from). \n\n",
"_____no_output_____"
],
[
"Second we will connect your local repo to __your__ online repo.\n<br>The command `git remote add` connects (`add`) a remote (`remote`) URL to your local repository using:\n- a name for you remote (let's use origin, again) \n- a URL (the URL just just copied)\n<br>Type:\n>`git remote add origin` [PASTE COPIED URL HERE] ",
"_____no_output_____"
],
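[
"At any time you can check which remote URLs your local repository is connected to (this is optional, but a useful sanity check):\n>`git remote -v`\n\nYou should see the name `origin` listed next to the URL of your own GitHub repository.",
"_____no_output_____"
],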
[
" \n<br>The command `git push -u` uploads (pushes) the contents of your local repository to a remote repository:\n- a remote name (ours is \"origin\") \n- a *branch* of your repository (this is a more advanced feature of github. We will use the default branch ony. It is called \"master\")\n<br>Type:\n>`git push -u origin master`\n\nA new window should open.\n\nEnter your github login details:\n\n<img src=\"../../../ILAS_seminars/intro to python/img/GitHubLogin.png\" alt=\"Drawing\" style=\"width: 200px;\"/>\n\nReturn to the teminal. You may be prompted to enter your login details a second time:\n\n<img src=\"../../../ILAS_seminars/intro to python/img/GitHubTermLogin.png\" alt=\"Drawing\" style=\"width: 500px;\"/>\n\nYou should see a few lines of code appear, ending with the message:\n\n>`Branch master set up to track remote branch master from origin`\n",
"_____no_output_____"
],
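[
"A note on the `-u` option: it tells Git to remember that your local `master` branch is linked to `origin`. After this first push you can usually upload new commits with the shorter command:\n>`git push`\n\nIf Git ever asks you to be more specific, the full command `git push origin master` always works.",
"_____no_output_____"
],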
[
"Now look again at your online GitHub page.\n\nClick on the \"code\" tab to reload the page.\n\n<img src=\"../../../ILAS_seminars/intro to python/img/github_code.png\" alt=\"Drawing\" style=\"width: 300px;\"/>\n\nThe textbook should now have appeared in your online repository. ",
"_____no_output_____"
],
[
"## Tracking changes using Git\n\nThroughout this course you will develop new skills by completing excercises in the interactive textbook.\n\nAt the end of the course you will have all of your notes and practise excercises in one place, accessible from almost anywhere.\n\nEach time you make changes to the textbook, save it and exit it, either in class or at home, track the changes using git abd sync them with the online repository. The following sections will show how to do this. \n\nWe are now going to:\n - use Git to record the changes you make to the textbook.\n - upload it to your online GitHub repository so that you can access it online. ",
"_____no_output_____"
],
[
"Git has a two-step process for saving changes.\n\n1. Select files for which to log changes (__\"add\"__)\n1. Log changes (__\"commit\"__)\n\nThis is an advanced feature.\n\nFor now, we will just learn to __add__ all the files in our directory (rather than selecting individual files).\n\nWhen files have been __added__ but not yet __commited__, we say they have been *staged*. \n\n<img src=\"../../../ILAS_seminars/intro to python/img/git_simple_workflow.png\" alt=\"Drawing\" style=\"width: 500px;\"/>\n",
"_____no_output_____"
],
[
"In the terminal type:\n>`git add -A`\n\nto take a snapshot of the changes to all (`-A`) the files in your local directory.\n<br>This is held at the index (stage).\n\nThen:\n>`git commit -m \"A short message explaining your changes\"`\n\nto save the changes with a message (`-m`) you can refer to to remind you what you changed. \n<br>\n\nTo avoid losing any changes, these commands are usually executed in immediate succession.\n\n",
"_____no_output_____"
],
[
"To see the commit you just made type:\n>`git log`\n\nYou will see the message you write and a long number.\n\nWe can return to this point in the your work at any time by referencing this number. \n\nType:\n>`q`\n\nto exit the log commit log.",
"_____no_output_____"
],
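[
"As an aside, one way (of several) to bring back an old version of a single file is to use that long commit number. For example, if `1a2b3c4` were a commit number copied from `git log`, you could restore a file (the file name below is just an example) as it was at that commit with:\n>`git checkout 1a2b3c4 -- 0_Introduction.ipynb`\n\nHere `1a2b3c4` and the file name are only placeholders; replace them with a real commit number and file from your own repository. We will not need this command in class, but it shows why the commit log is so useful.",
"_____no_output_____"
],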
[
"## Updating your Online GitHub Repository \n\nWe have updated the Git repository held on the computer.\n\nThe last thing we need to do is to update your online repository. \n\nWe do this using the `push` command.\n\n<img src=\"../../../ILAS_seminars/intro to python/img/git-local-remote-workflow-cropped.png\" alt=\"Drawing\" style=\"width: 500px;\"/>",
"_____no_output_____"
],
[
"You used the `push` command when you originally uploaded the textbook to your repository.\n\nEnter exactly the same code into the terminal:\n\nType:\n\n git push -u origin master\n \nEnter your GitHub login details when prompted. \n",
"_____no_output_____"
],
[
"Go to the web browser and open the file 0_Introduction.\n\nScroll down to where you made the change. \n\nHint: look for the marker: \n\n<img src=\"../../../ILAS_seminars/intro to python/img/change.jpg\" alt=\"Drawing\" style=\"width: 100px;\"/>",
"_____no_output_____"
],
[
"<a id='InstallingSoftwareHomeUse'></a>\n## Installing Git for Home Use \n\nIt is highly recommended that you download and install the software we will use in class:\n- Jupyter notebook (anaconda)\n- Git\n\nYou will need to use this software to complete homework assigments and prepare for the exam. \n\nBoth are free to download and install.\n\nWhen running Git you do not need to use the \n\n",
"_____no_output_____"
],
[
"Anaconda (which includes Jupyter notebook) can be downloaded from: https://www.anaconda.com/download/\n><br>Python 3.6 version and Python 2.7 version are available.\n><br>Choose Python 3.6 version\n\nGit can be downloaded from: https://github.com/git-for-windows/git/releases/tag/v2.14.1.windows.1\n>Choose Git-2.14.1-64-bit.exe if you have a 64 bit operating system.\n><br> Choose Git-2.14.1-32-bit.exe if you have a 32 bit operating system.\n\nAn easy to follow download wizard will launch for each piece of software. \n\n__NOTE:__ The procedure to install git on your personal computer is different from the method we have used in the \"Installing Git\" Section of this seminar. \n",
"_____no_output_____"
],
[
"## Installing Git for On-Campus Use\n\nGit is only available in the computer lab (Room North wing 21, Academic Center Bldg., Yoshida-South Campus).\n\nTo use Git on a Kyoto University computer outside of the computer lab you need install Git in your local user area:\n\nDownload the Git program from here:\n\nhttps://github.com/git-for-windows/git/releases/tag/v2.14.1.windows.1\n\nThe version you need is: \n\nPortableGit-2.14.1-32-bit.7z.exe",
"_____no_output_____"
],
[
"When prompted, choose to __run the file__ 実行(R).\n\n<img src=\"../../../ILAS_seminars/intro to python/img/GitHubInstallRun.png\" alt=\"Drawing\" style=\"width: 200px;\"/>\n\nWhen prompted, change the location to save the file to:\n\nM:¥Documents¥PortableGit \n\n<img src=\"../../../ILAS_seminars/intro to python/img/GitLocation.png\" alt=\"Drawing\" style=\"width: 200px;\"/>\n\nPress OK\n\nThe download may take some time. \n\n\n\n\n",
"_____no_output_____"
],
[
"Once the download has completed...\n\nTo open the terminal:\n - press \"win key\" + \"R\"\n - type: __cmd__\n - press enter",
"_____no_output_____"
],
[
"In the terminal type: \n\n>`M:`\n\n...and press enter, to switch to the M drive.\n",
"_____no_output_____"
],
[
"You should see the command prompt change. \n\n\n<img src=\"../../../ILAS_seminars/intro to python/img/KUterminalMdrive.png\" alt=\"Drawing\" style=\"width: 700px;\"/>",
"_____no_output_____"
],
[
"To navigate to documents type:\n\n>`cd Documents`\n\ncd stands for \"change directory\".\n\n\n",
"_____no_output_____"
],
[
"You should now see a folder called PortableGit in the contents list of __Documents__ folder. ",
"_____no_output_____"
],
[
"Type:\n>cd PortableGit \n\nto move into your PortableGit folder.\n\nTo check git has installed type:\n\n>`git-bash.exe`\n\nA new terminal window will open. In this window type:\n\n>`git --version`\n\nIf Git has installed, the version of the program will be dipolayed. You should see something like this:\n\n<img src=\"../../../ILAS_seminars/intro to python/img/git-version.gif\" alt=\"Drawing\" style=\"width: 500px;\"/>\n\nClose the window.",
"_____no_output_____"
],
[
"The final thing we need to do is to tell the computer where to look for the Git program. \n\nMove one step up from the Git folder. In the original terminal window, type:\n\n> `cd ..`\n\nNow enter the following in the terminal:\n\n> PATH=M:¥Documents¥PortableGit¥bin;%PATH%\n\n(you may need to have your keyboard set to JP to achieve this)\n\n\n<img src=\"../../../ILAS_seminars/intro to python/img/windows_change_lang.png\" alt=\"Drawing\" style=\"width: 400px;\"/>\n\n\nYou can type this or __copy and paste__ it from the README section on the github page we looked at earlier.\n\n<img src=\"img/readme_.png\" alt=\"Drawing\" style=\"width: 500px;\"/>\n\n__Whenever to use Git on a Kyoto University computer outside of the computer lab (Room North wing 21, Academic Center Bldg., Yoshida-South Campus), you must first opena terminal and type the line of code above to tell the computer where to look for the Git program.__ \n\n\n",
"_____no_output_____"
],
[
"The program Git has its own terminal commands.\n\nEach one starts with the word `git`\n\nYou can check git is working by typing:\n\n>`git status`\n\nYou should see something like this:\n\n<img src=\"../../../ILAS_seminars/intro to python/img/git-version.gif\" alt=\"Drawing\" style=\"width: 500px;\"/>\n",
"_____no_output_____"
],
[
"## Creating a Local Repository on your Personal Computer\n\nIn addition to the local copy of the textbook you have stored on the Kyoto University M drive, you are going to make a local copy of the interactive textbook on your personal computer so that you can make and save changes, for example when doing your homework.\n\nThis is the same process as when you initially *cloned* the textbook from the ILAS_python online repository, except this time you will clone the textbook from your personal GitHub repository.\n\nOn your personal computer...\n\nIf you have not already installed Git, then go back to <a href='#InstallingSoftwareHomeUse'>Installing Software for Home Use.</a>\n\nOpen the terminal:\n\n__On Windows:__\n - press \"win key\" + \"R\"\n - type: __cmd__\n - press enter\n \n __On Mac:__\n - Open the \"Terminal\" application \n \n __On Linux:__\n - Open the \"Terminal\" application \n \nor\n \n - press \"Ctrl\" + \"Alt\" + \"T\"\n \nUsing `cd`, navigate to where you want the folder containing the textbook to appear. \n\nIn a web browswer, open your personal Github page that you created earlier. \n\nNavigate to the online repository to which you uploaded the textbook. \n\n*Note that you can view the textbook online on the GitHub site.*\n\nClick the button \"Clone or download\" and copy the link by presssing Ctrl , C\n\n<img src=\"../../../ILAS_seminars/intro to python/img/clone-url.png\" alt=\"Drawing\" style=\"width: 500px;\"/>\n\nIn the terminal type `git clone`. After the word `clone` leave a space and then paste the URL that you just copied.:\n\n> `git clone` [PASTE COPIED URL HERE]\n\nIn the terminal type:\n\n> `dir`\n\nA folder called \"ILAS_python\" should have appeared. \n\nGo into the folder and view the content by typing:\n\n>`cd ILAS_python`\n><br>`dir` ",
"_____no_output_____"
],
[
"You can now open the notebooks stored in the repository in Jupyter Notebook to complete your homework. ",
"_____no_output_____"
],
[
"## Syncronising Repositories\n\n You now have three repositiories.\n <br>__Two Local Repositories__\n - Kyoto Univeristy M drive\n - your personal computer\n \n \n<br>__One Online Repository__\n - GitHub\n \nEach repository contains a copy of the interactive Python textbook.\n\nWe want to keep the three repositories *syncronised*. The version of the textbook in each repository should be the same. When we make changes to the textbook, either using a Kyoto University computer or your personal computer, the changes should be added to both the online repository and the other local repository.\n\nThe online repository can be accessed from either a Kyoto University computer or your personal computer. It is less easy for the two local repositories to access each other.\n\nTherefore we will use GitHub as a central repository that we use to pass changes between the two local repositories. \n\n<img src=\"img/syncing_repos.png\" alt=\"Drawing\" style=\"width: 300px;\"/>",
"_____no_output_____"
],
[
"### Pushing and Pulling\n\nThis syncronisation is done using the Git commands __`push`__ and __`pull`__.\n\nWhen you have competed your homework on your personal computer, you __`push`__ the changes to GitHub.\n\n<img src=\"img/syncing_repos_home_push.png\" alt=\"Drawing\" style=\"width: 300px;\"/>\n\nWe use the process we have already learnt to push changes from a local repository to our online repository.\n\nLet's recap....\n\n#### Pushing Changes to an Online Repository\n\nOpen a terminal:\n\n>__On Windows:__\n> - press \"win key\" + \"R\"\n> - type: __cmd__\n> - press enter\n> \n> __On Mac:__\n> - Open the \"Terminal\" application \n> \n> __On Linux:__\n> - Open the \"Terminal\" application \n \n>or\n \n> - press \"Ctrl\" + \"Alt\" + \"T\"\n \nUsing `cd`, navigate to where you want the folder containing the textbook to appear. \n\nIn the terminal type:\n>`git add -A`\n>`git commit -m \"A short message explaining your changes\"`\n>`git push origin master`\n \nEnter your GitHub login details when prompted. \n\nYour online remote repository should now have been updated.",
"_____no_output_____"
],
[
"#### Pulling Changes to a Local Repository\nAfter commiting the changes made on your personal computer to GitHub (e.g. homework), the next time you log on to a Kyoto University computer, the local repository will be *behind* (not up-to-date with):\n- the online repository on GitHub\n- the local repository on your personal computer\n\nYou need to __pull__ the changes from the online repository to your local repository on the M drive.\n\n<img src=\"img/syncing_repos_home_to_class.png\" alt=\"Drawing\" style=\"width: 300px;\"/>\n\nOpen a terminal.\n\nUsing `cd`, navigate to *inside* the folder in which your textbook is stored. \n\nType:\n\n>`git pull master`\n\nEnter your GitHub login details when prompted. \n\nThe local repository on the M drive should have now been updated with the changes you made using your personal computer. \n\nAt the end of the seminar, you will once again update the online reopsitory with the changes you make in-class.\n\n<img src=\"img/syncing_repos_class_push.png\" alt=\"Drawing\" style=\"width: 300px;\"/>\n\nWhen you get home you must __pull__ the changes before starting this week's homework. \n\n<img src=\"img/syncing_repos_class_to_home.png\" alt=\"Drawing\" style=\"width: 300px;\"/>\n\n#### Resolving Clashes\n\nIf the online repository is *ahead* of the local repository you are currently working on (ie. it has been updated from another repository), you are required to __pull__ the updates from the online repository to your current local repository before you can __push__ from the current local repository to the online repository. \n\nTherefore, it is best practise to always:\n- __push__ your changes at the end of a work session\n- __pull__ your changes at the begining of a work session\n\nIf you begin working on a local repository *before* pulling the most recent changes, you may cause clashes between the version held locally and the version held online. \n\nWhen you pull the changes from online, these clashes create errors which can be difficult to resolve.\n<br>(*We will cover some possible solutions if/when clashes occur later in the course*).\n\nTo avoid clashes:\n- stick to the fomat of the textbook and use the boxes provided when completeing your answers to the review questions (the importance of this is explained in the next section)\n- follow the __push__...__pull__...__push__... working format.",
"_____no_output_____"
],
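[
"To summarise, a typical work session (in class or at home) uses only the commands you have already seen. At the start of the session, update the local repository you are about to work on:\n>`git pull origin master`\n\nAt the end of the session, record and upload your changes:\n>`git add -A`\n><br>`git commit -m \"A short message explaining your changes\"`\n><br>`git push origin master`",
"_____no_output_____"
],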
[
"## Pulling the Homework Solutions from an \"Uptream\" Repository\n\nSometimes you may want to pull changes from an online repository other than your personal GitHub repository.\n\nFor example, the original repository from which you cloned the textbook will change during the course: \n- example solutions to the previous seminars review exercisess will be added weekly.\n- new chapters will be added before the second half of the course begins. \n\nTo __pull__ these changes, incorporating them into your local version of the textbook, you need to connect your local repository to the online repositories where these changes will be held. \n\n### Adding an Online Repository\n\nFirst check that your local and online repositories are syncronised by __pulling__ and __pushing__ as necessary.\n\nOpen a terminal.\n\nUsing `cd`, navigate to *inside* the folder in which your textbook is stored. \n\nType:\n>`git remote add upstream https://github.com/hphilamore/ILAS_python.git`\n\nWe have connected (`remote add`) the online repository from which we originally cloned the textbook to our local repository and called it `upstream` to distinguish it from our main online repository, `origin`.\n\nType:\n>`git fetch upstream`\n>`git merge upstream/master master`\n\nAny changes made to the original version of the textbook should now be incorporated with your local version. \n<br>*To avoid clashes between the two versions it is particularly important to stick to the format of the textbook and use the boxes provided when completeing your answers to the review questions.* \n\nLastly, remember to push your changes to your personal online repository.\n<br>Type:\n>`git push origin master`\n\n__NOTE:__ You only need to add the repository `upstream` once. Add this to the local repository on both your personal computer and the local repository on the Kyoto University system.\n\nAfter the remote repository has been added, each time you want to pull changes from `upstream` to your local repository simply:\n- First check that your local and online repositories are syncronised by __pulling__ and __pushing__ as necessary.\n- Navigate to *inside* the folder in which your textbook is stored using the terminal. \n- Type:\n>`git fetch upstream`\n>`git merge upstream/master master`\n>`git push origin master`\n\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
cb5acfaeeefe6509ca59f2c0206ad4a3051796af | 5,824 | ipynb | Jupyter Notebook | tutorials/tensorflow-keras-jupyter/examples/packages.ipynb | fsattila/docs | 3940f5d17ebbfc0bf1904287230ae58b686264b7 | [
"Apache-2.0"
] | 2 | 2016-10-28T19:08:01.000Z | 2017-06-06T10:02:57.000Z | tutorials/tensorflow-keras-jupyter/examples/packages.ipynb | fsattila/docs | 3940f5d17ebbfc0bf1904287230ae58b686264b7 | [
"Apache-2.0"
] | 3 | 2016-01-08T12:06:39.000Z | 2016-12-02T16:26:02.000Z | tutorials/tensorflow-keras-jupyter/examples/packages.ipynb | fsattila/docs | 3940f5d17ebbfc0bf1904287230ae58b686264b7 | [
"Apache-2.0"
] | 16 | 2015-11-05T12:31:17.000Z | 2022-02-22T12:50:46.000Z | 37.333333 | 75 | 0.313187 | [
[
[
"%pip list",
"Package Version \n-------------------- ------------\nabsl-py 0.9.0 \naffine 2.3.0 \nastor 0.8.1 \nastropy 4.0.1.post1 \nattrs 19.3.0 \nbackcall 0.1.0 \nbleach 3.1.4 \ncachetools 4.1.0 \ncertifi 2020.4.5.1 \nchardet 3.0.4 \nclick 7.1.1 \nclick-plugins 1.1.1 \ncligj 0.5.0 \ncycler 0.10.0 \ndask 2.14.0 \ndecorator 4.4.2 \ndefusedxml 0.6.0 \nentrypoints 0.3 \nFiona 1.8.13.post1\ngast 0.2.2 \ngeopandas 0.7.0 \ngoogle-auth 1.14.1 \ngoogle-auth-oauthlib 0.4.1 \ngoogle-pasta 0.2.0 \ngrpcio 1.28.1 \nh5py 2.10.0 \nidna 2.9 \nimageio 2.8.0 \nimgaug 0.4.0 \nimportlib-metadata 1.6.0 \nipykernel 5.2.1 \nipython 7.13.0 \nipython-genutils 0.2.0 \njedi 0.17.0 \nJinja2 2.11.2 \njoblib 0.14.1 \njson5 0.9.4 \njsonschema 3.2.0 \njupyter-client 6.1.3 \njupyter-core 4.6.3 \njupyterlab 2.1.0 \njupyterlab-server 1.1.1 \nKeras 2.3.1 \nKeras-Applications 1.0.8 \nKeras-Preprocessing 1.1.0 \nkiwisolver 1.2.0 \nMarkdown 3.2.1 \nMarkupSafe 1.1.1 \nmatplotlib 3.2.1 \nmistune 0.8.4 \nmunch 2.5.0 \nnbconvert 5.6.1 \nnbformat 5.0.6 \nnetworkx 2.4 \nnotebook 6.0.3 \nnumpy 1.18.3 \noauthlib 3.1.0 \nopencv-python 4.2.0.34 \nopt-einsum 3.2.1 \npandas 1.0.3 \npandocfilters 1.4.2 \nparso 0.7.0 \npexpect 4.8.0 \npickleshare 0.7.5 \nPillow 7.1.1 \npip 20.0.2 \npkg-resources 0.0.0 \nprometheus-client 0.7.1 \nprompt-toolkit 3.0.5 \nprotobuf 3.11.3 \nptyprocess 0.6.0 \npyasn1 0.4.8 \npyasn1-modules 0.2.8 \nPygments 2.6.1 \npyparsing 2.4.7 \npyproj 2.6.0 \npyrsistent 0.16.0 \npython-dateutil 2.8.1 \npytz 2019.3 \nPyWavelets 1.1.1 \nPyYAML 5.3.1 \npyzmq 19.0.0 \nrasterio 1.1.3 \nrequests 2.23.0 \nrequests-oauthlib 1.3.0 \nrsa 4.0 \nscikit-image 0.16.2 \nscikit-learn 0.22.2.post1\nscipy 1.4.1 \nSend2Trash 1.5.0 \nsetuptools 46.1.3 \nShapely 1.7.0 \nsix 1.14.0 \nsklearn 0.0 \nsnuggs 1.4.7 \ntensorboard 2.1.1 \ntensorflow 2.1.0 \ntensorflow-estimator 2.1.0 \ntermcolor 1.1.0 \nterminado 0.8.3 \ntestpath 0.4.4 \ntornado 6.0.4 \ntraitlets 4.3.3 \nurllib3 1.25.9 \nwcwidth 0.1.9 \nwebencodings 0.5.1 \nWerkzeug 1.0.1 \nwheel 0.34.2 \nwrapt 1.12.1 \nxarray 0.15.1 \nzipp 3.1.0 \nNote: you may need to restart the kernel to use updated packages.\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
cb5ad5fa429edfb2d3db0fe65ab260bd87519834 | 13,376 | ipynb | Jupyter Notebook | static/fundamentals/u4n2-trace-mnist.ipynb | kny4/cs344-1 | aaf674418ec7f9546ba339da95ad866d495b3961 | [
"MIT"
] | null | null | null | static/fundamentals/u4n2-trace-mnist.ipynb | kny4/cs344-1 | aaf674418ec7f9546ba339da95ad866d495b3961 | [
"MIT"
] | null | null | null | static/fundamentals/u4n2-trace-mnist.ipynb | kny4/cs344-1 | aaf674418ec7f9546ba339da95ad866d495b3961 | [
"MIT"
] | null | null | null | 22.709677 | 426 | 0.536259 | [
[
[
"# Trace Simple Image Classifier\n\nTask: trace and explain the dimensionality of each tensor in a simple image classifier.",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
]
],
[
[
"from fastai.vision.all import *\nfrom fastbook import *\n\nmatplotlib.rc('image', cmap='Greys')",
"_____no_output_____"
]
],
[
[
"Get some example digits from the MNIST dataset.",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.MNIST_SAMPLE)",
"_____no_output_____"
],
[
"threes = (path/'train'/'3').ls().sorted()\nsevens = (path/'train'/'7').ls().sorted()\nlen(threes), len(sevens)",
"_____no_output_____"
]
],
[
[
"Here is one image:",
"_____no_output_____"
]
],
[
[
"example_3 = Image.open(threes[1])\nexample_3",
"_____no_output_____"
]
],
[
[
"To prepare to use it as input to a neural net, we first convert integers from 0 to 255 into floating point numbers between 0 and 1.",
"_____no_output_____"
]
],
[
[
"example_3_tensor = tensor(example_3).float() / 255\nexample_3_tensor.shape",
"_____no_output_____"
],
[
"height, width = example_3_tensor.shape",
"_____no_output_____"
]
],
[
[
"Our particular network will ignore the spatial relationship between the features; later we'll learn about network architectures that do pay attention to spatial neighbors. So we'll *flatten* the image tensor into 28\\*28 values.",
"_____no_output_____"
]
],
[
[
"example_3_flat = example_3_tensor.view(width * height)\nexample_3_flat.shape",
"_____no_output_____"
]
],
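[
[
"As a side note (not required for the exercise), flattening only reshapes the tensor; the 784 pixel values themselves are unchanged. A couple of equivalent ways to flatten, shown here as a sketch you could paste into a code cell:\n\n```python\nimport torch  # usually already available via the fastai import above\n\nalt_flat = example_3_tensor.view(-1)    # -1 lets PyTorch infer the length (784)\nalso_flat = example_3_tensor.flatten()  # same result via the flatten method\nprint(alt_flat.shape, also_flat.shape)\nprint(torch.equal(alt_flat, example_3_flat))  # True: same values, only the shape changed\n```",
"_____no_output_____"
]
],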
[
[
"## Task\n\n",
"_____no_output_____"
],
[
"We'll define a simple neural network (in the book, a 3-vs-7 classifier) as the sequential combination of 3 layers. First we define each layer:",
"_____no_output_____"
]
],
[
[
"# Define the layers. This is where you'll try changing constants.\nlinear_1 = nn.Linear(in_features=784, out_features=30)\nrelu_layer = nn.ReLU()\nlinear_2 = nn.Linear(in_features=30, out_features=1)",
"_____no_output_____"
]
],
[
[
"Then we put them together in sequence.",
"_____no_output_____"
]
],
[
[
"simple_net = nn.Sequential(\n linear_1,\n relu_layer,\n linear_2\n)",
"_____no_output_____"
]
],
[
[
"Each of `nn.Linear`, `nn.ReLU`, and `nn.Squential` are PyTorch *modules*. We can *call* a module with some input data to get the output data:",
"_____no_output_____"
]
],
[
[
"simple_net(example_3_flat)",
"_____no_output_____"
]
],
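[
[
"The final `Linear` layer has `out_features=1`, so the network returns a single unbounded score. This network is untrained, so the number itself is meaningless for now. As a sketch of how such a score is often read as a probability (an assumption here; the book's training setup defines the exact interpretation), you could squash it with a sigmoid:\n\n```python\nimport torch  # usually already available via the fastai import above\n\nraw_score = simple_net(example_3_flat)  # tensor with a single element\nprob = torch.sigmoid(raw_score)         # maps any real number into (0, 1)\nprint(raw_score, prob)\n```",
"_____no_output_____"
]
],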
[
[
"Your turn:",
"_____no_output_____"
],
[
"1. Obtain the same result as the line above by applying each layer in sequence.\n\nThe outputs of each layer are called *activations*, so we can name the variables `act1` for the activations of layer 1, and so forth. Each `act` will be a function of the previous `act` (or the `inp`ut, for the first layer.)",
"_____no_output_____"
]
],
[
[
"inp = example_3_flat\nact1 = ...",
"_____no_output_____"
],
[
"act2 = ...",
"_____no_output_____"
],
[
"act3 = ...",
"_____no_output_____"
]
],
[
[
"2. Evaluate `act1`, `act2`, and `act3`. (Code already provided; look at the results.)",
"_____no_output_____"
]
],
[
[
"act1",
"_____no_output_____"
],
[
"act2",
"_____no_output_____"
],
[
"act3",
"_____no_output_____"
]
],
[
[
"2. Evaluate the `shape` of `act1`, `act2`, and `act3`.",
"_____no_output_____"
]
],
[
[
"# your code here",
"_____no_output_____"
]
],
[
[
"3. Write expressions for the shapes of each activation in terms of `linear_1.in_features`, `linear_2.out_features`, etc. (ignore the `torch.Size(` part)",
"_____no_output_____"
]
],
[
[
"linear_1.in_features",
"_____no_output_____"
],
[
"act1_shape = [...]\nact2_shape = [...]\nact3_shape = [...]\n\nassert list(act1_shape) == list(act1.shape)\nassert list(act2_shape) == list(act2.shape)\nassert list(act3_shape) == list(act3.shape)",
"_____no_output_____"
]
],
[
[
"4. Evaluate the `shape` of `linear_1.weight`, `linear_1.bias`, and the same for `linear_2`. Write expressions that give the value of each shape in terms of the `in_features` and other parameters.",
"_____no_output_____"
]
],
[
[
"print(f\"Linear 1: Weight shape is {list(linear_1.weight.shape)}, bias shape is {list(linear_1.bias.shape)}\")\nprint(f\"Linear 2: Weight shape is {list(linear_2.weight.shape)}, bias shape is {list(linear_2.bias.shape)}\")",
"Linear 1: Weight shape is [30, 784], bias shape is [30]\nLinear 2: Weight shape is [1, 30], bias shape is [1]\n"
],
[
"linear_1_weight_shape = [...]\nlinear_1_bias_shape = [...]\nlinear_2_weight_shape = [...]\nlinear_2_bias_shape = [...]",
"_____no_output_____"
],
[
"assert list(linear_1_weight_shape) == list(linear_1.weight.shape)\nassert list(linear_1_bias_shape) == list(linear_1.bias.shape)\nassert list(linear_2_weight_shape) == list(linear_2.weight.shape)\nassert list(linear_2_bias_shape) == list(linear_2.bias.shape)",
"_____no_output_____"
]
],
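[
[
"To see why `linear_1.weight` has the shape it does, it can help to reproduce the first layer by hand. `nn.Linear` computes `x @ W.T + b`, where `W` has shape `[out_features, in_features]`. A minimal sketch, assuming you have already set `act1 = linear_1(inp)` in the exercise above:\n\n```python\nimport torch  # usually already available via the fastai import above\n\nmanual_act1 = inp @ linear_1.weight.T + linear_1.bias  # [784] @ [784, 30] + [30] -> [30]\nprint(torch.allclose(manual_act1, act1))               # expect True\n```",
"_____no_output_____"
]
],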
[
[
"## Analysis",
"_____no_output_____"
],
[
"1. Try changing each of the constants provided to the `nn.Linear` modules. Identify an example of:\n 1. A constant that can be freely changed in the neural net definition.\n 2. A constant that cannot be changed because it depends on the input.\n 3. A pair of constants that must be changed together.",
"_____no_output_____"
],
[
"*your answer here*",
"_____no_output_____"
],
[
"2. Describe the relationship between the values in `act1` and `act2`.",
"_____no_output_____"
],
[
"*your answer here*",
"_____no_output_____"
],
[
"3. In a concise but complete sentence, describe the shapes of the parameters of the `Linear` layer (`weight` and `bias`).",
"_____no_output_____"
],
[
"*your answer here*",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
cb5ad604a1f9273d1cd5785e9e5328f94877bef2 | 367,115 | ipynb | Jupyter Notebook | Revision/ANCOM/WGS/WGS_ANCOM.ipynb | bryansho/PCOS_WGS_16S_metabolome | 634663df6f674c5151f4a61339cc2b0354e469e8 | [
"MIT"
] | 3 | 2021-01-04T08:19:48.000Z | 2021-03-16T08:15:55.000Z | Revision/ANCOM/WGS/WGS_ANCOM.ipynb | bryansho/PCOS_WGS_16S_metabolome | 634663df6f674c5151f4a61339cc2b0354e469e8 | [
"MIT"
] | null | null | null | Revision/ANCOM/WGS/WGS_ANCOM.ipynb | bryansho/PCOS_WGS_16S_metabolome | 634663df6f674c5151f4a61339cc2b0354e469e8 | [
"MIT"
] | 2 | 2020-12-04T08:02:04.000Z | 2021-06-07T16:00:40.000Z | 303.150289 | 82,300 | 0.911551 | [
[
[
"# ANCOM: WGS",
"_____no_output_____"
]
],
[
[
"library(tidyverse)\nlibrary(magrittr)\nsource(\"/Users/Cayla/ANCOM/scripts/ancom_v2.1.R\")",
"_____no_output_____"
]
],
[
[
"## T2",
"_____no_output_____"
]
],
[
[
"t2 <- read_csv('https://github.com/bryansho/PCOS_WGS_16S_metabolome/raw/master/DESEQ2/WGS/T2/T2_filtered_greater_00001.csv')\nhead(t2,n=1)",
"Warning message:\n“Missing column names filled in: 'X1' [1]”\n\n\u001b[36m──\u001b[39m \u001b[1m\u001b[1mColumn specification\u001b[1m\u001b[22m \u001b[36m──────────────────────────────────────────────────\u001b[39m\ncols(\n .default = col_double(),\n X1 = \u001b[31mcol_character()\u001b[39m\n)\n\u001b[36mℹ\u001b[39m Use \u001b[30m\u001b[47m\u001b[30m\u001b[47m`spec()`\u001b[47m\u001b[30m\u001b[49m\u001b[39m for the full column specifications.\n\n\n"
],
[
"t2.meta <- read_csv('https://github.com/bryansho/PCOS_WGS_16S_metabolome/raw/master/DESEQ2/WGS/T2/Deseq2_T2_mapping.csv')\nhead(t2.meta,n=1)",
"\n\u001b[36m──\u001b[39m \u001b[1m\u001b[1mColumn specification\u001b[1m\u001b[22m \u001b[36m──────────────────────────────────────────────────\u001b[39m\ncols(\n Sample = \u001b[31mcol_character()\u001b[39m,\n Treatment = \u001b[31mcol_character()\u001b[39m,\n Timepoint = \u001b[32mcol_double()\u001b[39m\n)\n\n\n"
],
[
"# subset data\nt2.meta.PvL <- t2.meta %>% filter(Treatment == 'Placebo' | Treatment == 'Let')\nt2.PvL <- t2 %>% select(X1, any_of(t2.meta.PvL$Sample)) %>% column_to_rownames('X1')\n\nt2.meta.LvLCH <- t2.meta %>% filter(Treatment == 'Let' | Treatment == 'CoL')\nt2.LvLCH <- t2 %>% select(X1, any_of(t2.meta.LvLCH$Sample)) %>% column_to_rownames('X1')",
"_____no_output_____"
]
],
[
[
"### Placebo vs. Let",
"_____no_output_____"
]
],
[
[
"# Data Preprocessing\n\n# feature_table is a df/matrix with features as rownames and samples in columns\nfeature_table <- t2.PvL \n\n# character vector/column containing sample IDs\nsample_var <- \"Sample\"\n\n# grouping variable to detect structural zeros and outliers\ngroup_var <- \"Treatment\"\n\n# 0 < fraction < 1. For each feature, observations with proportion of mixture \n# distribution < out_cut will be detected as outlier zeros;\n# > (1 - out_cut) will be detected as outlier values\nout_cut <- 0.05\n\n# 0 < fraction < 1. Features with proportion of zeros > zero_cut are removed.\nzero_cut <- 0.90 \n\n# samples with library size < lib_cut will be excluded in the analysis\nlib_cut <- 0\n\n# TRUE indicates a taxon would be classified as a structural zero in the \n# corresponding experimental group using its asymptotic lower bound. More \n# specifically, ```neg_lb = TRUE``` indicates you are using both criteria \n# stated in section 3.2 of [ANCOM-II]\n# (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5682008/) to detect structural\n# zeros; Otherwise, ```neg_lb = FALSE``` will only use the equation 1 in \n# section 3.2 of [ANCOM-II](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5682008/)\n# for declaring structural zeros.\nneg_lb <- TRUE\n\nprepro <- feature_table_pre_process(feature_table, t2.meta.PvL, sample_var, group_var, \n out_cut, zero_cut, lib_cut, neg_lb)\n\n# Preprocessed feature table\nfeature_table1 <- prepro$feature_table\n\n# Preprocessed metadata\nmeta_data1 <- prepro$meta_data \n\n# Structural zero info\nstruc_zero1 <- prepro$structure_zeros ",
"_____no_output_____"
],
[
"# Run ANCOM\n\n# name of the main variable of interest (character)\nmain_var <- \"Treatment\"\n\np_adj_method <- \"BH\" # number of taxa > 10, therefore use Benjamini-Hochberg correction\n\nalpha <- 0.05\n\n# character string representing the formula for adjustment\nadj_formula <- NULL \n\n# character string representing the formula for random effects in lme\nrand_formula <- NULL\n\nt_start <- Sys.time()\n\nres <- ANCOM(feature_table1, meta_data1, struc_zero1, main_var, p_adj_method, \n alpha, adj_formula, rand_formula)\n\nt_end <- Sys.time()\nt_end - t_start \n\n# write output to file\n# output contains the \"W\" statistic for each taxa - a count of the number of times \n# the null hypothesis is rejected for each taxa\n# detected_x are logicals indicating detection at specified FDR cut-off\nwrite_csv(res$out, \"2021-07-25_WGS_T2_PvL_ANCOM_data.csv\")",
"_____no_output_____"
],
[
"n_taxa <- ifelse(is.null(struc_zero1), nrow(feature_table1), sum(apply(struc_zero1, 1, sum) == 0))\nres$fig + scale_y_continuous(sec.axis = sec_axis(~ . * 100 / n_taxa, name = 'W proportion'))\nggsave(filename = paste(lubridate::today(),'volcano_WGS_T2_PvL.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')",
"_____no_output_____"
],
[
"# to find most significant taxa, I will sort the data \n# 1) y (W statistic)\n# 2) according to the absolute value of CLR mean difference\nsig <- res$fig$data %>% \n mutate(taxa_id = str_split_fixed(res$fig$data$taxa_id, pattern='s_', n=2)[,2]) %>% # remove leading 's_'\n arrange(desc(y), desc(abs(x))) %>% \n filter(y >= (0.7*n_taxa), !is.na(taxa_id)) # keep significant taxa, remove unidentified taxa\n\nwrite.csv(sig, paste(lubridate::today(),'SigFeatures_WGS_T2_PvL.csv',sep='_'))",
"_____no_output_____"
],
[
"# save features with W > 0 \nnon.zero <- res$fig$data %>% \n arrange(desc(y), desc(abs(x))) %>% \n mutate(taxa_id = str_split_fixed(res$fig$data$taxa_id, pattern='s_', n=2)[,2], # remove leading 's_'\n W.proportion = y/(n_taxa-1)) %>% # add W \n filter(y > 0) %>% \n rowid_to_column()\n \nwrite.csv(non.zero, paste(lubridate::today(),'NonZeroW_Features_WGS_T2_PvL.csv',sep='_'))",
"_____no_output_____"
],
[
"# plot top 20 taxa\nsig %>% \n slice_head(n=20) %>% \n ggplot(aes(x, taxa_id)) +\n geom_point(aes(size = 1)) +\n theme_bw(base_size = 16) + \n guides(size = FALSE) +\n labs(x = 'CLR Mean Difference', y = NULL)\n\nggsave(filename = paste(lubridate::today(),'Top20_WGS_T2_PvL.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina', width = 10)",
"Saving 10 x 7 in image\n\n"
]
],
[
[
"### Let v Let-co-housed",
"_____no_output_____"
]
],
[
[
"# Data Preprocessing\n\nfeature_table <- t2.LvLCH \nsample_var <- \"Sample\"\ngroup_var <- \"Treatment\"\nout_cut <- 0.05\nzero_cut <- 0.90 \nlib_cut <- 0\nneg_lb <- TRUE\n\nprepro <- feature_table_pre_process(feature_table, t2.meta.LvLCH, sample_var, group_var, \n out_cut, zero_cut, lib_cut, neg_lb)\n\n# Preprocessed feature table\nfeature_table2 <- prepro$feature_table\n\n# Preprocessed metadata\nmeta_data2 <- prepro$meta_data \n\n# Structural zero info\nstruc_zero2 <- prepro$structure_zeros ",
"_____no_output_____"
],
[
"# Run ANCOM\n\n# name of the main variable of interest (character)\nmain_var <- \"Treatment\"\n\np_adj_method <- \"BH\" # number of taxa > 10, therefore use Benjamini-Hochberg correction\n\nalpha <- 0.05\n\n# character string representing the formula for adjustment\nadj_formula <- NULL \n\n# character string representing the formula for random effects in lme\nrand_formula <- NULL\n\nt_start <- Sys.time()\n\nres2 <- ANCOM(feature_table2, meta_data2, struc_zero2, main_var, p_adj_method, \n alpha, adj_formula, rand_formula)\n\nt_end <- Sys.time()\nt_end - t_start \n\n# write output to file\n# output contains the \"W\" statistic for each taxa - a count of the number of times \n# the null hypothesis is rejected for each taxa\n# detected_x are logicals indicating detection at specified FDR cut-off\nwrite_csv(res2$out, \"2021-07-25_WGS_T2_LvLCH_ANCOM_data.csv\")",
"_____no_output_____"
],
[
"n_taxa <- ifelse(is.null(struc_zero2), nrow(feature_table2), sum(apply(struc_zero2, 1, sum) == 0))\nres2$fig + scale_y_continuous(sec.axis = sec_axis(~ . * 100 / n_taxa, name = 'W proportion'))\nggsave(filename = paste(lubridate::today(),'volcano_WGS_T2_LvLCH.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')",
"_____no_output_____"
],
[
"# save features with W > 0 \nnon.zero <- res2$fig$data %>% \n arrange(desc(y), desc(abs(x))) %>% \n mutate(taxa_id = str_split_fixed(res2$fig$data$taxa_id, pattern='s_', n=2)[,2], # remove leading 's_'\n W.proportion = y/(n_taxa-1)) %>% # add W \n filter(y > 0) %>% \n rowid_to_column()\n \nwrite.csv(non.zero, paste(lubridate::today(),'NonZeroW_Features_WGS_T2_LvLCH.csv',sep='_'))",
"_____no_output_____"
],
[
"# to find most significant taxa, I will sort the data \n# 1) y (W statistic)\n# 2) according to the absolute value of CLR mean difference\nsig <- res2$fig$data %>% \n mutate(taxa_id = str_split_fixed(res2$fig$data$taxa_id, pattern='s_', n=2)[,2]) %>% # remove leading 's_'\n arrange(desc(y), desc(abs(x))) %>% \n filter(y >= (0.7*n_taxa), !is.na(taxa_id)) # keep significant taxa, remove unidentified taxa\n\nwrite.csv(sig, paste(lubridate::today(),'SigFeatures_WGS_T2_LvLCH.csv',sep='_'))",
"_____no_output_____"
],
[
"# plot top 20 taxa\nsig %>% \n slice_head(n=20) %>% \n ggplot(aes(x, taxa_id)) +\n geom_point(aes(size = 1)) +\n theme_bw(base_size = 16) + \n guides(size = FALSE) +\n labs(x = 'CLR Mean Difference', y = NULL)\n\nggsave(filename = paste(lubridate::today(),'Top20_WGS_T2_LvLCH.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina', width = 10)",
"Saving 10 x 7 in image\n\n"
]
],
[
[
"## T5",
"_____no_output_____"
]
],
[
[
"t5 <- read_csv('https://github.com/bryansho/PCOS_WGS_16S_metabolome/raw/master/DESEQ2/WGS/T5/T5_filtered_greater_00001.csv')\nhead(t5,n=1)",
"Warning message:\n“Missing column names filled in: 'X1' [1]”\n\n\u001b[36m──\u001b[39m \u001b[1m\u001b[1mColumn specification\u001b[1m\u001b[22m \u001b[36m──────────────────────────────────────────────────\u001b[39m\ncols(\n .default = col_double(),\n X1 = \u001b[31mcol_character()\u001b[39m\n)\n\u001b[36mℹ\u001b[39m Use \u001b[30m\u001b[47m\u001b[30m\u001b[47m`spec()`\u001b[47m\u001b[30m\u001b[49m\u001b[39m for the full column specifications.\n\n\n"
],
[
"t5.meta <- read_csv('https://github.com/bryansho/PCOS_WGS_16S_metabolome/raw/master/DESEQ2/WGS/T5/Deseq2_T5_mapping.csv')\nhead(t5.meta,n=1)",
"\n\u001b[36m──\u001b[39m \u001b[1m\u001b[1mColumn specification\u001b[1m\u001b[22m \u001b[36m──────────────────────────────────────────────────\u001b[39m\ncols(\n SampleID = \u001b[31mcol_character()\u001b[39m,\n Treatment = \u001b[31mcol_character()\u001b[39m,\n Timepoint = \u001b[32mcol_double()\u001b[39m\n)\n\n\n"
],
[
"# subset data\nt5.meta.PvL <- t5.meta %>% filter(Treatment == 'Placebo' | Treatment == 'Let')\nt5.PvL <- t5 %>% select(X1, any_of(t5.meta.PvL$SampleID)) %>% column_to_rownames('X1')\n\nt5.meta.LvLCH <- t5.meta %>% filter(Treatment == 'Let' | Treatment == 'CoL')\nt5.LvLCH <- t5 %>% select(X1, any_of(t5.meta.LvLCH$SampleID)) %>% column_to_rownames('X1')",
"_____no_output_____"
]
],
[
[
"### Placebo v Let",
"_____no_output_____"
]
],
[
[
"# Data Preprocessing\n\nfeature_table <- t5.PvL \nsample_var <- \"SampleID\"\ngroup_var <- \"Treatment\"\nout_cut <- 0.05\nzero_cut <- 0.90 \nlib_cut <- 0\nneg_lb <- TRUE\n\nprepro <- feature_table_pre_process(feature_table, t5.meta.PvL, sample_var, group_var, \n out_cut, zero_cut, lib_cut, neg_lb)\n\n# Preprocessed feature table\nfeature_table3 <- prepro$feature_table\n\n# Preprocessed metadata\nmeta_data3 <- prepro$meta_data \n\n# Structural zero info\nstruc_zero3 <- prepro$structure_zeros ",
"_____no_output_____"
],
[
"# Run ANCOM\n\n# name of the main variable of interest (character)\nmain_var <- \"Treatment\"\n\np_adj_method <- \"BH\" # number of taxa > 10, therefore use Benjamini-Hochberg correction\n\nalpha <- 0.05\n\n# character string representing the formula for adjustment\nadj_formula <- NULL \n\n# character string representing the formula for random effects in lme\nrand_formula <- NULL\n\nt_start <- Sys.time()\n\nres3 <- ANCOM(feature_table3, meta_data3, struc_zero3, main_var, p_adj_method, \n alpha, adj_formula, rand_formula)\n\nt_end <- Sys.time()\nt_end - t_start \n\n# write output to file\n# output contains the \"W\" statistic for each taxa - a count of the number of times \n# the null hypothesis is rejected for each taxa\n# detected_x are logicals indicating detection at specified FDR cut-off\nwrite_csv(res3$out, \"2021-07-25_WGS_T5_PvL_ANCOM_data.csv\")",
"_____no_output_____"
],
[
"n_taxa <- ifelse(is.null(struc_zero3), nrow(feature_table3), sum(apply(struc_zero3, 1, sum) == 0))\nres3$fig + scale_y_continuous(sec.axis = sec_axis(~ . * 100 / n_taxa, name = 'W proportion'))\nggsave(filename = paste(lubridate::today(),'volcano_WGS_T5_PvL.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')",
"_____no_output_____"
],
[
"# save features with W > 0 \nnon.zero <- res3$fig$data %>% \n arrange(desc(y), desc(abs(x))) %>% \n mutate(taxa_id = str_split_fixed(res3$fig$data$taxa_id, pattern='s_', n=2)[,2], # remove leading 's_'\n W.proportion = y/(n_taxa-1)) %>% # add W \n filter(y > 0) %>% \n rowid_to_column()\n \nwrite.csv(non.zero, paste(lubridate::today(),'NonZeroW_Features_WGS_T5_PvL.csv',sep='_'))",
"_____no_output_____"
],
[
"# to find most significant taxa, I will sort the data \n# 1) y (W statistic)\n# 2) according to the absolute value of CLR mean difference\nsig <- res3$fig$data %>% \n mutate(taxa_id = str_split_fixed(res3$fig$data$taxa_id, pattern='s_', n=2)[,2]) %>% # remove leading 's_'\n arrange(desc(y), desc(abs(x))) %>% \n filter(y >= (0.7*n_taxa), !is.na(taxa_id)) # keep significant taxa, remove unidentified taxa\n\nwrite.csv(sig, paste(lubridate::today(),'SigFeatures_WGS_T5_PvL.csv',sep='_'))",
"_____no_output_____"
],
[
"# plot top 20 taxa\nsig %>% \n slice_head(n=20) %>% \n ggplot(aes(x, taxa_id)) +\n geom_point(aes(size = 1)) +\n theme_bw(base_size = 16) + \n guides(size = FALSE) +\n labs(x = 'CLR Mean Difference', y = NULL)\n\nggsave(filename = paste(lubridate::today(),'Top20_WGS_T5_PvL.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina', width = 10)",
"Saving 10 x 7 in image\n\n"
]
],
[
[
"### Let v Let-co-housed",
"_____no_output_____"
]
],
[
[
"# Data Preprocessing\n\nfeature_table <- t5.LvLCH \nsample_var <- \"SampleID\"\ngroup_var <- \"Treatment\"\nout_cut <- 0.05\nzero_cut <- 0.90 \nlib_cut <- 0\nneg_lb <- TRUE\n\nprepro <- feature_table_pre_process(feature_table, t5.meta.LvLCH, sample_var, group_var, \n out_cut, zero_cut, lib_cut, neg_lb)\n\n# Preprocessed feature table\nfeature_table4 <- prepro$feature_table\n\n# Preprocessed metadata\nmeta_data4 <- prepro$meta_data \n\n# Structural zero info\nstruc_zero4 <- prepro$structure_zeros ",
"_____no_output_____"
],
[
"# Run ANCOM\n\n# name of the main variable of interest (character)\nmain_var <- \"Treatment\"\n\np_adj_method <- \"BH\" # number of taxa > 10, therefore use Benjamini-Hochberg correction\n\nalpha <- 0.05\n\n# character string representing the formula for adjustment\nadj_formula <- NULL \n\n# character string representing the formula for random effects in lme\nrand_formula <- NULL\n\nt_start <- Sys.time()\n\nres4 <- ANCOM(feature_table4, meta_data4, struc_zero4, main_var, p_adj_method, \n alpha, adj_formula, rand_formula)\n\nt_end <- Sys.time()\nt_end - t_start \n\n# write output to file\n# output contains the \"W\" statistic for each taxa - a count of the number of times \n# the null hypothesis is rejected for each taxa\n# detected_x are logicals indicating detection at specified FDR cut-off\nwrite_csv(res4$out, \"2021-07-25_WGS_T5_LvLCH_ANCOM_data.csv\")",
"_____no_output_____"
],
[
"n_taxa <- ifelse(is.null(struc_zero4), nrow(feature_table4), sum(apply(struc_zero4, 1, sum) == 0))\nres4$fig + scale_y_continuous(sec.axis = sec_axis(~ . * 100 / n_taxa, name = 'W proportion'))\nggsave(filename = paste(lubridate::today(),'volcano_WGS_T5_LvLCH.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')",
"Saving 7 x 7 in image\n\n"
],
[
"# save features with W > 0 \nnon.zero <- res4$fig$data %>% \n arrange(desc(y), desc(abs(x))) %>% \n mutate(taxa_id = str_split_fixed(res4$fig$data$taxa_id, pattern='s_', n=2)[,2], # remove leading 's_'\n W.proportion = y/(n_taxa-1)) %>% # add W \n filter(y > 0) %>% \n rowid_to_column()\n \nwrite.csv(non.zero, paste(lubridate::today(),'NonZeroW_Features_WGS_T5_LvLCH.csv',sep='_'))",
"_____no_output_____"
],
[
"# to find most significant taxa, I will sort the data \n# 1) y (W statistic)\n# 2) according to the absolute value of CLR mean difference\nsig <- res4$fig$data %>% \n mutate(taxa_id = str_split_fixed(res4$fig$data$taxa_id, pattern='s_', n=2)[,2]) %>% # remove leading 's_'\n arrange(desc(y), desc(abs(x))) %>% \n filter(y >= (0.7*n_taxa), !is.na(taxa_id)) # keep significant taxa, remove unidentified taxa\n\nwrite.csv(sig, paste(lubridate::today(),'SigFeatures_WGS_T5_LvLCH.csv',sep='_'))",
"_____no_output_____"
],
[
"# plot top 20 taxa\nsig %>% \n slice_head(n=20) %>% \n ggplot(aes(x, taxa_id)) +\n geom_point(aes(size = 1)) +\n theme_bw(base_size = 16) + \n guides(size = FALSE) +\n labs(x = 'CLR Mean Difference', y = NULL)\n\nggsave(filename = paste(lubridate::today(),'Top20_WGS_T5_LvLCH.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina', width=10)",
"ERROR while rich displaying an object: Error: Aesthetics must be either length 1 or the same as the data (1): x and y\n\nTraceback:\n1. FUN(X[[i]], ...)\n2. tryCatch(withCallingHandlers({\n . if (!mime %in% names(repr::mime2repr)) \n . stop(\"No repr_* for mimetype \", mime, \" in repr::mime2repr\")\n . rpr <- repr::mime2repr[[mime]](obj)\n . if (is.null(rpr)) \n . return(NULL)\n . prepare_content(is.raw(rpr), rpr)\n . }, error = error_handler), error = outer_handler)\n3. tryCatchList(expr, classes, parentenv, handlers)\n4. tryCatchOne(expr, names, parentenv, handlers[[1L]])\n5. doTryCatch(return(expr), name, parentenv, handler)\n6. withCallingHandlers({\n . if (!mime %in% names(repr::mime2repr)) \n . stop(\"No repr_* for mimetype \", mime, \" in repr::mime2repr\")\n . rpr <- repr::mime2repr[[mime]](obj)\n . if (is.null(rpr)) \n . return(NULL)\n . prepare_content(is.raw(rpr), rpr)\n . }, error = error_handler)\n7. repr::mime2repr[[mime]](obj)\n8. repr_text.default(obj)\n9. paste(capture.output(print(obj)), collapse = \"\\n\")\n10. capture.output(print(obj))\n11. evalVis(expr)\n12. withVisible(eval(expr, pf))\n13. eval(expr, pf)\n14. eval(expr, pf)\n15. print(obj)\n16. print.ggplot(obj)\n17. ggplot_build(x)\n18. ggplot_build.ggplot(x)\n19. by_layer(function(l, d) l$compute_aesthetics(d, plot))\n20. f(l = layers[[i]], d = data[[i]])\n21. l$compute_aesthetics(d, plot)\n22. f(..., self = self)\n23. check_aesthetics(evaled, n)\n24. abort(glue(\"Aesthetics must be either length 1 or the same as the data ({n}): \", \n . glue_collapse(names(which(!good)), \", \", last = \" and \")))\n25. signal_abort(cnd)\nSaving 10 x 7 in image\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5ad747a55ba0bff00ac6b19e8613bce06867a1 | 364,173 | ipynb | Jupyter Notebook | notebooks/plots.ipynb | kylehiroyasu/stochastic-YOLO | ac05bcab6a364b877f53d78026cf1a7018944e3e | [
"Apache-2.0"
] | null | null | null | notebooks/plots.ipynb | kylehiroyasu/stochastic-YOLO | ac05bcab6a364b877f53d78026cf1a7018944e3e | [
"Apache-2.0"
] | null | null | null | notebooks/plots.ipynb | kylehiroyasu/stochastic-YOLO | ac05bcab6a364b877f53d78026cf1a7018944e3e | [
"Apache-2.0"
] | null | null | null | 245.565071 | 190,394 | 0.884008 | [
[
[
"import json\nimport os\nfrom pathlib import Path\n\nimport matplotlib \nmatplotlib.rcParams['font.family'] = ['Noto Serif CJK JP']\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom sklearn import datasets\nfrom sklearn.metrics import brier_score_loss\nfrom sklearn.calibration import calibration_curve\n",
"_____no_output_____"
],
[
"ROOT = Path('/home/kylehiroyasu/programming/masters_thesis/stochastic-YOLO/results_21_09_2021')",
"_____no_output_____"
],
[
"files = [f for f in os.listdir(ROOT) if 'csv' in f]\nfiles.sort()\nfiles",
"_____no_output_____"
],
[
"files = [\n 'ccpd_blur.csv',\n 'ccpd_blur_dropout.csv',\n 'ccpd_blur_ensemble.csv',\n 'ccpd_challenge.csv',\n 'ccpd_challenge_dropout.csv',\n 'ccpd_challenge_ensemble.csv',\n 'ccpd_db.csv',\n 'ccpd_db_dropout.csv',\n 'ccpd_db_ensemble.csv',\n 'ccpd_fn.csv',\n 'ccpd_fn_dropout.csv',\n 'ccpd_fn_ensemble.csv',\n 'ccpd_rotate.csv',\n 'ccpd_rotate_dropout.csv',\n 'ccpd_rotate_ensemble.csv',\n 'ccpd_tilt.csv',\n 'ccpd_tilt_dropout.csv',\n 'ccpd_tilt_ensemble.csv',\n 'ccpd.csv',\n 'ccpd_dropout.csv',\n 'ccpd_ensemble.csv',\n 'ccpd_weather.csv',\n 'ccpd_weather_dropout.csv',\n 'ccpd_weather_ensemble.csv'\n]",
"_____no_output_____"
],
[
"groups = ['blur', 'challenge', 'db', 'fn', 'rotate', 'tilt', 'val']\ngroups = ['Blur', 'Challenge', 'DB', 'FN', 'Rotate', 'Tilt', 'Base', 'Weather']",
"_____no_output_____"
],
[
"def load_data(path: str):\n all_predictions = []\n with open(path, mode='r') as f:\n for line in f.readlines():\n prediction = json.loads(line)\n for correct, confidence, bbv, entropy in zip(prediction['correct'], prediction['confidence'], prediction['bounding_box_variance'], prediction['entropy']):\n data = {\n 'image_name': prediction['image_name'],\n 'correct': correct[0],\n 'confidence': confidence,\n 'bounding_box_variance':bbv,\n 'entropy': entropy\n }\n all_predictions.append(data)\n return pd.DataFrame(all_predictions)",
"_____no_output_____"
],
[
"def plot_calibration_curve(data_dict: dict, dataset_name: str, fig_index):\n\n\n fig = plt.figure(fig_index, figsize=(10, 10))\n ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)\n ax2 = plt.subplot2grid((3, 1), (2, 0))\n\n ax1.plot([0, 1], [0, 1], \"k:\", label=\"Perfectly calibrated\")\n for name, df in data_dict.items():\n y_test = df.correct\n prob_pos = df.confidence\n clf_score = brier_score_loss(y_test, prob_pos, pos_label=1)\n #print(\"%s:\" % name)\n #print(\"\\tBrier: %1.3f\" % (clf_score))\n #print(\"\\tPrecision: %1.3f\" % precision_score(y_test, y_pred))\n #print(\"\\tRecall: %1.3f\" % recall_score(y_test, y_pred))\n #print(\"\\tF1: %1.3f\\n\" % f1_score(y_test, y_pred))\n\n fraction_of_positives, mean_predicted_value = \\\n calibration_curve(y_test, prob_pos, n_bins=10)\n\n # ax1.plot(mean_predicted_value, fraction_of_positives, \"s-\",\n # label=\"%s (%1.3f)\" % (name, clf_score))\n ax1.plot(mean_predicted_value, fraction_of_positives, \"s-\",\n label=name)\n\n ax2.hist(prob_pos, range=(0, 1), bins=10, label=name,\n histtype=\"step\", lw=2)\n\n ax1.set_ylabel(\"Fraction of positives\")\n ax1.set_ylim([-0.05, 1.05])\n ax1.legend(loc=\"lower right\")\n ax1.set_title(f'Calibration plots {dataset_name} (reliability curve)')\n\n ax2.set_xlabel(\"Mean predicted value\")\n ax2.set_ylabel(\"Count\")\n ax2.legend(loc=\"upper center\", ncol=2)\n \n plt.tight_layout()\n plt.savefig(f'{dataset_name}_detection.svg')",
"_____no_output_____"
],
[
"datasets = []\nfor i in range(0, len(files), 3):\n data = load_data(ROOT/files[i])\n dropout_data = load_data(ROOT/files[i+1])\n ensemble_data = load_data(ROOT/files[i+2])\n datasets.append({\n 'Normal': data,\n 'MC-Dropout (10 samples)': dropout_data,\n 'Ensemble (N=3)': ensemble_data\n })",
"_____no_output_____"
],
[
"all_results = []\nfor group, data in zip(groups, datasets):\n for k, df in data.items():\n df['model'] = k\n df['dataset'] = group\n all_results.append(df)\ndataset_results = pd.concat(all_results)\n",
"_____no_output_____"
],
[
"mean_results = dataset_results.groupby(by=['dataset','model']).mean()\nmean_results",
"_____no_output_____"
],
[
"mean_results.entropy.plot.barh()",
"_____no_output_____"
],
[
"summary = pd.read_csv(ROOT/'all_test_results.csv', index_col=0)\ndataset = {\n 'ccpd_blur.data': 'Blur', \n 'ccpd_challenge.data': 'Challenge', \n 'ccpd_db.data': 'DB',\n 'ccpd_fn.data': 'FN', \n 'ccpd_rotate.data': 'Rotate', \n 'ccpd_tilt.data': 'Tilt', \n 'ccpd.data': 'Base',\n 'ccpd_weather.data':'Weather'\n}\nmodel = {\n 'ensemble': 'Ensemble (N=3)',\n 'dropout': 'MC-Dropout (10 samples)',\n 'normal': 'Normal'\n}\nrename_columns = ['Data', 'do', 'Model','MP', 'MR', 'MAP', 'MF1']\nsummary.columns = rename_columns\nsummary['Data'] = summary.Data.apply(lambda d: dataset[d])\nsummary['Model'] = summary.Model.apply(lambda d: model[d])\nsummary_piv = summary.pivot(index='Model', columns='Data', values='MAP')\nsummary_piv",
"_____no_output_____"
],
[
"print(summary_piv.to_latex(float_format=\"%0.3f\"))",
"\\begin{tabular}{lrrrrrrrr}\n\\toprule\nData & Base & Blur & Challenge & DB & FN & Rotate & Tilt & Weather \\\\\nModel & & & & & & & & \\\\\n\\midrule\nEnsemble (N=3) & 0.995 & 0.967 & 0.990 & 0.927 & 0.960 & 0.993 & 0.992 & 0.995 \\\\\nMC-Dropout (10 samples) & 0.995 & 0.966 & 0.988 & 0.924 & 0.958 & 0.993 & 0.992 & 0.995 \\\\\nNormal & 0.995 & 0.966 & 0.988 & 0.926 & 0.959 & 0.993 & 0.992 & 0.995 \\\\\n\\bottomrule\n\\end{tabular}\n\n"
],
[
"results = mean_results.reset_index()\nresults['id'] = results.dataset + results.model",
"_____no_output_____"
],
[
"summary\nsummary['id'] = summary.Data + summary.Model",
"_____no_output_____"
],
[
"merged = pd.merge(summary, results, on='id')\nmerged",
"_____no_output_____"
],
[
"fig, axs = plt.subplots(ncols=3, sharex=True, sharey='row', figsize=(15,5))\n\nfor i, (name, group) in enumerate(merged.groupby(by='Model')):\n group.plot.scatter(x='entropy', y='MAP',xlabel='Average Entropy', ax=axs[i])\n axs[i].set_title(name)\nplt.savefig(f'map_vs_entropy.svg')\n",
"_____no_output_____"
],
[
"fig, axs = plt.subplots(ncols=3, sharex=True, sharey='row', figsize=(15,5))\n\nfor i, (name, group) in enumerate(merged.groupby(by='Model')):\n group.plot.scatter(x='bounding_box_variance', y='MAP',xlabel='Average Bounding Box Variance', ax=axs[i])\n axs[i].set_title(name)\nplt.savefig(f'map_vs_variance.svg')\n",
"_____no_output_____"
],
[
"#fig = plt.figure(figsize=(20, 20))\n#axes.Axes(fig, (0,0,3,8), sharex=True)\nfig, axs = plt.subplots(len(groups), 3, sharex=True, sharey='row', figsize=(15,15))\n\nfor i, (name, data_dict) in enumerate(zip(groups, datasets)):\n for j, (key, df) in enumerate(data_dict.items()):\n #df.groupby(by=['correct'])['entropy'].plot.hist(bins=10, ax=axs[i,j], alpha=.5)\n df.groupby(by=['correct'])['confidence'].plot.hist(bins=10, ax=axs[i,j], alpha=.5)\n\nfor i, k in enumerate(data_dict.keys()):\n axs[0, i].set_title(k, fontsize=14)\n \nfor i, k in enumerate(groups):\n axs[i, 0].set_ylabel(k, rotation=0, fontsize=14, labelpad=100)\n \nlines, labels = fig.axes[-1].get_legend_handles_labels()\nfig.legend(lines, labels, loc = 'upper right')\n\nplt.tight_layout()\nplt.savefig(f'detection_hist.svg')\nplt.show()",
"_____no_output_____"
],
[
"def plot_calibration_curve(data_dict: dict, dataset_name: str, ax1):\n\n ax1.plot([0, 1], [0, 1], \"k:\", label=\"Perfectly calibrated\")\n for name, df in data_dict.items():\n y_test = df.correct\n prob_pos = df.confidence\n clf_score = brier_score_loss(y_test, prob_pos, pos_label=1)\n\n fraction_of_positives, mean_predicted_value = \\\n calibration_curve(y_test, prob_pos, n_bins=10)\n\n ax1.plot(mean_predicted_value, fraction_of_positives, \"s-\",\n label=name)\n\n ax1.set_ylabel(\"Fraction of positives\")\n ax1.set_xlabel(\"Predicted Probability\")\n ax1.set_ylim([-0.05, 1.05])\n ax1.set_title(f'{dataset_name}')\n \n\n",
"_____no_output_____"
],
[
"fig, axs = plt.subplots(4, 2, sharex=True, sharey=True, figsize=(15,15))\n\nfor i, (name, data_dict) in enumerate(zip(groups, datasets)):\n row = i % 4\n col = int(i/4)\n plot_calibration_curve(data_dict=data_dict, dataset_name=name, ax1=axs[row, col])\n\nlines, labels = fig.axes[-1].get_legend_handles_labels()\nfig.legend(lines, labels, loc = 'upper right',borderaxespad=0.)\nfig\n\n\nplt.tight_layout()\nplt.savefig(f'detection_calibration.svg')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5aebeec260a378d5fc9814d5c9f3a07a3702ed | 790,953 | ipynb | Jupyter Notebook | old/0.2/training-on-BindingDB-CNN-basic.ipynb | VeaLi/MLT-LE | 8a31b0f74d6c11b07ace3960b1ab597d0edf6621 | [
"W3C",
"OLDAP-2.3"
] | 1 | 2022-03-17T11:56:47.000Z | 2022-03-17T11:56:47.000Z | old/0.2/training-on-BindingDB-CNN-basic.ipynb | VeaLi/MLT-LE | 8a31b0f74d6c11b07ace3960b1ab597d0edf6621 | [
"W3C",
"OLDAP-2.3"
] | null | null | null | old/0.2/training-on-BindingDB-CNN-basic.ipynb | VeaLi/MLT-LE | 8a31b0f74d6c11b07ace3960b1ab597d0edf6621 | [
"W3C",
"OLDAP-2.3"
] | null | null | null | 255.971845 | 202,788 | 0.906241 | [
[
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"></ul></div>",
"_____no_output_____"
]
],
[
[
"!pip install tensorflow-addons",
"Collecting tensorflow-addons\n Downloading tensorflow_addons-0.16.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)\n\u001b[K |████████████████████████████████| 1.1 MB 5.0 MB/s \n\u001b[?25hRequirement already satisfied: typeguard>=2.7 in /usr/local/lib/python3.7/dist-packages (from tensorflow-addons) (2.7.1)\nInstalling collected packages: tensorflow-addons\nSuccessfully installed tensorflow-addons-0.16.1\n"
],
[
"!pip install lifelines\n!pip install scikit-plot",
"Collecting lifelines\n Downloading lifelines-0.27.0-py3-none-any.whl (349 kB)\n\u001b[K |████████████████████████████████| 349 kB 5.1 MB/s \n\u001b[?25hCollecting formulaic>=0.2.2\n Downloading formulaic-0.3.3-py3-none-any.whl (56 kB)\n\u001b[K |████████████████████████████████| 56 kB 2.2 MB/s \n\u001b[?25hRequirement already satisfied: pandas>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from lifelines) (1.3.5)\nRequirement already satisfied: matplotlib>=3.0 in /usr/local/lib/python3.7/dist-packages (from lifelines) (3.2.2)\nRequirement already satisfied: autograd>=1.3 in /usr/local/lib/python3.7/dist-packages (from lifelines) (1.3)\nRequirement already satisfied: numpy>=1.14.0 in /usr/local/lib/python3.7/dist-packages (from lifelines) (1.21.5)\nCollecting autograd-gamma>=0.3\n Downloading autograd-gamma-0.5.0.tar.gz (4.0 kB)\nRequirement already satisfied: scipy>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from lifelines) (1.4.1)\nRequirement already satisfied: future>=0.15.2 in /usr/local/lib/python3.7/dist-packages (from autograd>=1.3->lifelines) (0.16.0)\nRequirement already satisfied: wrapt>=1.0 in /usr/local/lib/python3.7/dist-packages (from formulaic>=0.2.2->lifelines) (1.14.0)\nCollecting interface-meta<2.0.0,>=1.2.0\n Downloading interface_meta-1.3.0-py3-none-any.whl (14 kB)\nCollecting astor<0.8.0,>=0.7.0\n Downloading astor-0.7.1-py2.py3-none-any.whl (27 kB)\nCollecting scipy>=1.2.0\n Downloading scipy-1.7.3-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (38.1 MB)\n\u001b[K |████████████████████████████████| 38.1 MB 2.4 MB/s \n\u001b[?25hRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=3.0->lifelines) (2.8.2)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=3.0->lifelines) (0.11.0)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=3.0->lifelines) (1.4.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=3.0->lifelines) (3.0.7)\nRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from kiwisolver>=1.0.1->matplotlib>=3.0->lifelines) (3.10.0.2)\nRequirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=1.0.0->lifelines) (2018.9)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib>=3.0->lifelines) (1.15.0)\nBuilding wheels for collected packages: autograd-gamma\n Building wheel for autograd-gamma (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for autograd-gamma: filename=autograd_gamma-0.5.0-py3-none-any.whl size=4048 sha256=1d564276f20ce005cae7d13b07377265a7b1930125b827eddfad99452a564780\n Stored in directory: /root/.cache/pip/wheels/9f/01/ee/1331593abb5725ff7d8c1333aee93a50a1c29d6ddda9665c9f\nSuccessfully built autograd-gamma\nInstalling collected packages: scipy, interface-meta, astor, formulaic, autograd-gamma, lifelines\n Attempting uninstall: scipy\n Found existing installation: scipy 1.4.1\n Uninstalling scipy-1.4.1:\n Successfully uninstalled scipy-1.4.1\n Attempting uninstall: astor\n Found existing installation: astor 0.8.1\n Uninstalling astor-0.8.1:\n Successfully uninstalled astor-0.8.1\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. 
This behaviour is the source of the following dependency conflicts.\ngoogle-colab 1.0.0 requires astor~=0.8.1, but you have astor 0.7.1 which is incompatible.\nalbumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.\u001b[0m\nSuccessfully installed astor-0.7.1 autograd-gamma-0.5.0 formulaic-0.3.3 interface-meta-1.3.0 lifelines-0.27.0 scipy-1.7.3\n"
],
[
"import tensorflow as tf\nimport tensorflow_addons as tfa\n\nfrom tensorflow import keras\nfrom sklearn.model_selection import train_test_split\nfrom keras import backend as K\nfrom tensorflow.keras.layers import StringLookup\nfrom tqdm.keras import TqdmCallback\nfrom tqdm.auto import tqdm\nimport random\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nplt.style.use('default')\nplt.style.use('seaborn-white')\n\nmodels = tf.keras.models\nlayers = tf.keras.layers\npreprocessing = tf.keras.preprocessing\n\ntqdm.pandas()",
"A:\\ProgramData\\Anaconda3\\lib\\site-packages\\tensorflow_addons\\utils\\ensure_tf_install.py:37: UserWarning: You are currently using a nightly version of TensorFlow (2.9.0-dev20220310). \nTensorFlow Addons offers no support for the nightly versions of TensorFlow. Some things might work, some other might not. \nIf you encounter a bug, do not file an issue on GitHub.\n warnings.warn(\n"
],
[
"def mse_nan(y_true, y_pred):\n masked_true = tf.where(tf.math.is_nan(y_true), tf.zeros_like(y_true), y_true)\n masked_pred = tf.where(tf.math.is_nan(y_true), tf.zeros_like(y_true), y_pred)\n return K.mean(K.square(masked_pred - masked_true), axis=-1)",
"_____no_output_____"
],
[
"def get_optimizer():\n optimizer = tf.keras.optimizers.Adam()\n return optimizer\n\n\ndef get_model(num_shared=2, units=64, rate=0.3, loss_weights=None):\n\n sm = layers.Input(shape=(100, ), name='D_Inp')\n aa = layers.Input(shape=(1000, ), name='T_Inp')\n\n emsm0 = layers.Embedding(53,\n 128,\n trainable=True,\n name='D_Emb',\n mask_zero=True)(sm)\n emaa0 = layers.Embedding(22,\n 128,\n trainable=True,\n name='T_Emb',\n mask_zero=True)(aa)\n\n cnvsm1 = layers.Conv1D(32, 3, name='D_L1')(emsm0)\n cnvaa1 = layers.Conv1D(32, 3, name='T_L1')(emaa0)\n\n cnvsm2 = layers.Conv1D(64, 3, name='D_L2')(cnvsm1)\n cnvaa2 = layers.Conv1D(64, 3, name='T_L2')(cnvaa1)\n\n cnvsm3 = layers.Conv1D(96, 3, name='D_L3')(cnvsm2)\n cnvaa3 = layers.Conv1D(96, 3, name='T_L3')(cnvaa2)\n\n gmpsm = layers.GlobalMaxPool1D(name='D_Gmp')(cnvsm2)\n gmpaa = layers.GlobalMaxPool1D(name='T_Gmp')(cnvaa2)\n\n C1 = layers.concatenate([gmpsm, gmpaa], axis=-1, name='C1')\n\n S1 = layers.Dense(512, activation='relu', name='S1')(C1)\n S1 = layers.Dropout(rate)(S1)\n\n S2 = layers.Dense(512, activation='relu', name='S2')(S1)\n S2 = layers.Dropout(rate)(S2)\n\n S3 = layers.Dense(512, activation='relu', name='S3')(S2)\n S3 = layers.Dropout(rate)(S3)\n\n Kd = layers.Dense(units, activation='relu', name='S1_Kd')(S3)\n Kd = layers.Dropout(rate)(Kd)\n\n Ki = layers.Dense(units, activation='relu', name='S1_Ki')(S3)\n Ki = layers.Dropout(rate)(Ki)\n\n IC50 = layers.Dense(units, activation='relu', name='S1_IC50')(S3)\n IC50 = layers.Dropout(rate)(IC50)\n\n EC50 = layers.Dense(units, activation='relu', name='S1_EC50')(S3)\n EC50 = layers.Dropout(rate)(EC50)\n\n IA = layers.Dense(units, activation='relu', name='S1_IA')(S3)\n IA = layers.Dropout(rate)(IA)\n\n pH = layers.Dense(units, activation='relu', name='S1_pH')(S3)\n pH = layers.Dropout(rate)(pH)\n\n out1 = layers.Dense(1, activation='linear', name='Kd')(Kd)\n out2 = layers.Dense(1, activation='linear', name='Ki')(Ki)\n out3 = layers.Dense(1, activation='linear', name='IC50')(IC50)\n out4 = layers.Dense(1, activation='linear', name='EC50')(EC50)\n out5 = layers.Dense(1, activation='sigmoid', name='IA')(IA)\n out6 = layers.Dense(1, activation='linear', name='pH')(pH)\n\n model = models.Model(inputs=[sm, aa],\n outputs=[out1, out2, out3, out4, out5, out6])\n\n losses = {\n \"Kd\": mse_nan,\n \"Ki\": mse_nan,\n \"IC50\": mse_nan,\n \"EC50\": mse_nan,\n \"pH\": mse_nan,\n \"IA\": \"binary_crossentropy\",\n }\n\n metrics = {\"IA\": tf.keras.metrics.AUC()}\n\n model.compile(loss=losses, optimizer=get_optimizer(), metrics=metrics, loss_weights=loss_weights)\n model.summary()\n return model",
"_____no_output_____"
],
[
"tf.keras.backend.clear_session()\nnp.random.seed(7)\ntf.random.set_seed(7)\n\nloss_weights = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]\n\nmodel = get_model(rate=0.3, loss_weights=loss_weights)",
"Model: \"model\"\n__________________________________________________________________________________________________\n Layer (type) Output Shape Param # Connected to \n==================================================================================================\n D_Inp (InputLayer) [(None, 100)] 0 [] \n \n T_Inp (InputLayer) [(None, 1000)] 0 [] \n \n D_Emb (Embedding) (None, 100, 128) 6784 ['D_Inp[0][0]'] \n \n T_Emb (Embedding) (None, 1000, 128) 2816 ['T_Inp[0][0]'] \n \n D_L1 (Conv1D) (None, 98, 32) 12320 ['D_Emb[0][0]'] \n \n T_L1 (Conv1D) (None, 998, 32) 12320 ['T_Emb[0][0]'] \n \n D_L2 (Conv1D) (None, 96, 64) 6208 ['D_L1[0][0]'] \n \n T_L2 (Conv1D) (None, 996, 64) 6208 ['T_L1[0][0]'] \n \n D_Gmp (GlobalMaxPooling1D) (None, 64) 0 ['D_L2[0][0]'] \n \n T_Gmp (GlobalMaxPooling1D) (None, 64) 0 ['T_L2[0][0]'] \n \n C1 (Concatenate) (None, 128) 0 ['D_Gmp[0][0]', \n 'T_Gmp[0][0]'] \n \n S1 (Dense) (None, 512) 66048 ['C1[0][0]'] \n \n dropout (Dropout) (None, 512) 0 ['S1[0][0]'] \n \n S2 (Dense) (None, 512) 262656 ['dropout[0][0]'] \n \n dropout_1 (Dropout) (None, 512) 0 ['S2[0][0]'] \n \n S3 (Dense) (None, 512) 262656 ['dropout_1[0][0]'] \n \n dropout_2 (Dropout) (None, 512) 0 ['S3[0][0]'] \n \n S1_Kd (Dense) (None, 64) 32832 ['dropout_2[0][0]'] \n \n S1_Ki (Dense) (None, 64) 32832 ['dropout_2[0][0]'] \n \n S1_IC50 (Dense) (None, 64) 32832 ['dropout_2[0][0]'] \n \n S1_EC50 (Dense) (None, 64) 32832 ['dropout_2[0][0]'] \n \n S1_IA (Dense) (None, 64) 32832 ['dropout_2[0][0]'] \n \n S1_pH (Dense) (None, 64) 32832 ['dropout_2[0][0]'] \n \n dropout_3 (Dropout) (None, 64) 0 ['S1_Kd[0][0]'] \n \n dropout_4 (Dropout) (None, 64) 0 ['S1_Ki[0][0]'] \n \n dropout_5 (Dropout) (None, 64) 0 ['S1_IC50[0][0]'] \n \n dropout_6 (Dropout) (None, 64) 0 ['S1_EC50[0][0]'] \n \n dropout_7 (Dropout) (None, 64) 0 ['S1_IA[0][0]'] \n \n dropout_8 (Dropout) (None, 64) 0 ['S1_pH[0][0]'] \n \n Kd (Dense) (None, 1) 65 ['dropout_3[0][0]'] \n \n Ki (Dense) (None, 1) 65 ['dropout_4[0][0]'] \n \n IC50 (Dense) (None, 1) 65 ['dropout_5[0][0]'] \n \n EC50 (Dense) (None, 1) 65 ['dropout_6[0][0]'] \n \n IA (Dense) (None, 1) 65 ['dropout_7[0][0]'] \n \n pH (Dense) (None, 1) 65 ['dropout_8[0][0]'] \n \n==================================================================================================\nTotal params: 835,398\nTrainable params: 835,398\nNon-trainable params: 0\n__________________________________________________________________________________________________\n"
],
[
"tf.keras.utils.plot_model(model, rankdir='LR',\n show_shapes=True,\n show_layer_activations=True)",
"_____no_output_____"
],
[
"CHARPROTSET = dict([('A', 1), ('G', 2), ('L', 3), ('M', 4), ('S', 5), ('T', 6),\n ('E', 7), ('Q', 8), ('P', 9), ('F', 10), ('R', 11),\n ('V', 12), ('D', 13), ('I', 14), ('N', 15), ('Y', 16),\n ('H', 17), ('C', 18), ('K', 19), ('W', 20), ('X', 21)])\n\nCHARCANSMISET = dict([(')', 1), ('(', 2), ('1', 3), ('C', 4), ('c', 5),\n ('O', 6), ('2', 7), ('N', 8), ('=', 9), ('n', 10),\n ('3', 11), ('-', 12), ('4', 13), ('F', 14), ('S', 15),\n ('[', 16), (']', 17), ('l', 18), ('H', 19), ('s', 20),\n ('#', 21), ('o', 22), ('5', 23), ('B', 24), ('r', 25),\n ('+', 26), ('6', 27), ('P', 28), ('.', 29), ('I', 30),\n ('7', 31), ('e', 32), ('i', 33), ('a', 34), ('8', 35),\n ('K', 36), ('A', 37), ('9', 38), ('T', 39), ('g', 40),\n ('R', 41), ('Z', 42), ('%', 43), ('0', 44), ('u', 45),\n ('V', 46), ('b', 47), ('t', 48), ('L', 49), ('*', 50),\n ('d', 51), ('W', 52)])",
"_____no_output_____"
],
[
"class Gen:\n def __init__(self,\n data,\n map_smiles,\n map_aa,\n shuffle=True,\n test_only=False,\n len_drug=100,\n len_target=1000,\n window=False):\n self.data = data\n self.map_smiles = map_smiles\n self.map_aa = map_aa\n self.shuffle = shuffle\n self.test_only = test_only\n self.len_drug = len_drug\n self.len_target = len_target\n self.size = self.data.shape[0]\n self.inds = list(range(self.size))\n if self.shuffle:\n random.shuffle(self.inds)\n\n self.window = window\n\n self.gen = self._get_inputs()\n\n def _get_inputs(self):\n seen = 0\n while seen < self.size:\n ind = self.inds[seen]\n sample = self.data.iloc[ind, :].values.tolist()\n sample[0] = self.map_smiles[sample[0]]\n sample[1] = self.map_aa[sample[1]]\n\n if self.window:\n ld = max(0, (len(sample[0]) - self.len_drug))\n lt = max(0, (len(sample[1]) - self.len_target))\n dstart = random.randint(0, ld)\n tstart = random.randint(0, lt)\n\n sample[0] = sample[0][dstart:dstart + self.len_drug]\n sample[1] = sample[1][tstart:dstart + self.len_target]\n\n yield sample\n seen += 1\n if seen == self.size:\n if self.shuffle:\n random.shuffle(self.inds)\n seen = 0\n\n def get_batch(self, batch_size):\n while True:\n BATCH = []\n for _ in range(batch_size):\n sample = next(self.gen)\n for k, value in enumerate(sample):\n if len(BATCH) < (k+1):\n BATCH.append([])\n BATCH[k].append(value)\n\n BATCH[0] = preprocessing.sequence.pad_sequences(BATCH[0], self.len_drug)\n BATCH[1] = preprocessing.sequence.pad_sequences(BATCH[1], self.len_target)\n \n for k in range(2, len(BATCH)):\n BATCH[k] = np.array(BATCH[k]).flatten()\n\n if not self.test_only:\n yield [BATCH[0], BATCH[1]], [BATCH[k] for k in range(2, len(BATCH))]\n else:\n yield [BATCH[0], BATCH[1]], [BATCH[k]*0 for k in range(2, len(BATCH))]",
"_____no_output_____"
],
[
"data = pd.read_csv(\"data_full_05_pH.zip\", compression='zip')\norder = [\n 'smiles', 'target', 'p1Kd', 'p1Ki', 'p1IC50', 'p1EC50', 'is_active', 'pH'\n]\ndata = data[order]",
"_____no_output_____"
],
[
"data = data.sample(frac=1, random_state = 7)\ndata.head()",
"_____no_output_____"
],
[
"data.dropna().shape",
"_____no_output_____"
],
[
"SMILES = {}\nfor smiles in tqdm(data['smiles'].unique()):\n SMILES[smiles] = [CHARCANSMISET[s] for s in smiles]\n\nAA = {}\nfor aa in tqdm(data['target'].unique()):\n AA[aa] = [CHARPROTSET[a.upper()] for a in aa]",
"_____no_output_____"
],
[
"X_train, X_test = train_test_split(data, test_size=0.1, shuffle=True, random_state = 7, stratify=data['is_active'])\nX_train, X_valid = train_test_split(X_train, test_size=0.1, shuffle=True, random_state = 7, stratify=X_train['is_active'])\n\nX_train.shape[0], X_test.shape[0], X_valid.shape[0]",
"_____no_output_____"
],
[
"X_train.head()",
"_____no_output_____"
],
[
"batch_size = 128\n\ntrg = Gen(X_train, SMILES, AA)\ntrg = trg.get_batch(batch_size)\n\nvag = Gen(X_valid, SMILES, AA)\nvag = vag.get_batch(batch_size)",
"_____no_output_____"
],
[
"# for batch in trg:\n# break\n\n# batch",
"_____no_output_____"
],
[
"steps_per_epoch = X_train.shape[0] // batch_size\nvalid_steps = X_valid.shape[0] // batch_size",
"_____no_output_____"
],
[
"filepath = \"{epoch:02d}-{val_loss:.2f}.h5\"\ncheckpoint = tf.keras.callbacks.ModelCheckpoint(filepath,\n monitor='val_loss',\n verbose=1,\n save_best_only=False,\n mode='auto',\n save_weights_only=True)",
"_____no_output_____"
],
[
"history = model.fit(trg,\n validation_data=vag,\n steps_per_epoch=steps_per_epoch,\n validation_steps=valid_steps,\n verbose=0,\n callbacks=[TqdmCallback(), checkpoint],\n epochs=50)",
"_____no_output_____"
],
[
"model.load_weights('45-5.30.h5')",
"_____no_output_____"
],
[
"# !rm *.h5 -r",
"_____no_output_____"
],
[
"# history.history",
"_____no_output_____"
],
[
"plt.plot(history.history['loss'], label='train')\nplt.plot(history.history['val_loss'], label='valid')\nplt.xlabel('Epoch')\nplt.title('Loss on train-valid subsets')\nplt.legend()",
"_____no_output_____"
],
[
"def get_batch_size(S):\n mbs = 1\n for i in range(1, min(64, S)):\n if S % i == 0:\n mbs = i\n assert S % mbs == 0\n\n return mbs",
"_____no_output_____"
],
[
"mbs = get_batch_size(X_test.shape[0])\nmbs",
"_____no_output_____"
],
[
"teg = Gen(X_test, SMILES, AA, shuffle=False, test_only=True)\nteg = teg.get_batch(mbs)",
"_____no_output_____"
],
[
"prediction = model.predict(teg, steps=X_test.shape[0]//mbs, verbose=1)",
"2210/2210 [==============================] - 78s 34ms/step\n"
],
[
"from sklearn.metrics import mean_squared_error\nfrom lifelines.utils import concordance_index\nfrom scipy import stats",
"_____no_output_____"
],
[
"def get_scores(y_true, y_pred):\n mse = np.round(mean_squared_error(y_true, y_pred), 3)\n rmse = np.round(mse**0.5, 3)\n ci = np.round(concordance_index(y_true, y_pred), 3)\n pearson = np.round(stats.pearsonr(y_true, y_pred)[0], 3)\n spearman = np.round(stats.spearmanr(y_true, y_pred)[0], 3)\n\n res = f\"rmse={rmse}, mse={mse},\\npearson={pearson}, spearman={spearman},\\nci={ci}\"\n\n return res",
"_____no_output_____"
],
[
"for k, col in enumerate(\n ['p1Kd', 'p1Ki', 'p1IC50', 'p1EC50', 'is_active', 'pH']):\n plt.scatter(X_test[col], prediction[k], alpha=0.7, c='k')\n plt.xlabel('true')\n plt.ylabel('predicted')\n y_true = X_test[col][X_test[col].notna()]\n y_pred = prediction[k][X_test[col].notna()].ravel()\n plt.title(col + \":\\n\" + get_scores(y_true, y_pred))\n plt.show() # 74.6",
"_____no_output_____"
],
[
"import scikitplot as skplt",
"_____no_output_____"
],
[
"p = prediction[-2].ravel().tolist()\nprobas = np.zeros((len(p),2))\nprobas[:,1] = p\nprobas[:,0] = 1\nprobas[:,0] = probas[:,0] - p\nskplt.metrics.plot_roc_curve(X_test['is_active'].values.ravel().tolist(), probas)\nplt.show()",
"A:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\utils\\deprecation.py:87: FutureWarning: Function plot_roc_curve is deprecated; This will be removed in v0.5.0. Please use scikitplot.metrics.plot_roc instead.\n warnings.warn(msg, category=FutureWarning)\n"
],
[
"plt.hist(prediction[-2].ravel(), bins=32, edgecolor='w', color='k', alpha=0.7);",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5aed1d337b5d0179a9f05161ff5e685115f1e2 | 5,035 | ipynb | Jupyter Notebook | 01_Analyzing_Patient_Data.ipynb | epfllibrary/2019-06-04-epfl-swc-python1-exercices | 1173942832165d053566dafae0464cafc9837be6 | [
"CC-BY-4.0"
] | null | null | null | 01_Analyzing_Patient_Data.ipynb | epfllibrary/2019-06-04-epfl-swc-python1-exercices | 1173942832165d053566dafae0464cafc9837be6 | [
"CC-BY-4.0"
] | null | null | null | 01_Analyzing_Patient_Data.ipynb | epfllibrary/2019-06-04-epfl-swc-python1-exercices | 1173942832165d053566dafae0464cafc9837be6 | [
"CC-BY-4.0"
] | null | null | null | 21.425532 | 211 | 0.528699 | [
[
[
"\n# I. Analyzing Patient Data\n",
"_____no_output_____"
],
[
"## Teaching",
"_____no_output_____"
],
[
"## Exercices\n\n### Check Your Understanding\nWhat values do the variables mass and age have after each statement in the following program? Test your answers by executing the commands.\n\n\n mass = 47.5\n age = 122\n mass = mass * 2.0\n age = age - 20\n print(mass, age)",
"_____no_output_____"
],
[
"### Slicing Strings\n\nA section of an array is called a slice. We can take slices of character strings as well:",
"_____no_output_____"
]
],
[
[
"element = 'oxygen'\nprint('first three characters:', element[0:3])\nprint('last three characters:', element[3:6])",
"first three characters: oxy\nlast three characters: gen\n"
]
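,
[
"# A ready-made check for the 'Check Your Understanding' exercise above: as that exercise\n# suggests, running the statements is one way to verify the values you predicted for mass\n# and age (the code below is copied from the exercise, nothing new is introduced).\nmass = 47.5\nage = 122\nmass = mass * 2.0\nage = age - 20\nprint(mass, age)",
"_____no_output_____"
]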
],
[
[
"What is the value of ``element[:4]?`` What about ``element[4:]?`` Or ``element[:]?``",
"_____no_output_____"
],
[
"### Make Your Own Plot\n\nCreate a plot showing the standard deviation (``numpy.std``) of the inflammation data for each day across all patients.",
"_____no_output_____"
],
[
"### Change In Inflammation\n\nThis patient data is longitudinal in the sense that each row represents a series of observations relating to one individual. This means that the change in inflammation over time is a meaningful concept.\n\nThe ``numpy.diff()`` function takes a NumPy array and returns the differences between two successive values along a specified axis. For example, a NumPy array that looks like this:",
"_____no_output_____"
]
],
[
[
"npdiff = numpy.array([ 0, 2, 5, 9, 14])",
"_____no_output_____"
]
],
[
[
"Calling ``numpy.diff(npdiff)`` would do the following calculations and put the answers in another array.\n[ 2 - 0, 5 - 2, 9 - 5, 14 - 9 ]",
"_____no_output_____"
]
],
[
[
"numpy.diff(npdiff)",
"_____no_output_____"
]
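,
[
"# Optional illustration (not part of the original lesson): numpy.diff along a chosen axis.\n# On a 2-D array the result is one element shorter along the axis you difference over.\nimport numpy  # in case it is not already imported in this session\nsmall = numpy.array([[1, 3, 6], [2, 4, 7]])\nprint(numpy.diff(small, axis=0))  # differences between rows -> shape (1, 3)\nprint(numpy.diff(small, axis=1))  # differences between columns -> shape (2, 2)",
"_____no_output_____"
]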
],
[
[
"Which axis would it make sense to use this function along?",
"_____no_output_____"
],
[
"If the shape of an individual data file is ``(60, 40)`` (60 rows and 40 columns), what would the shape of the array be after you run the ``diff()`` function and why?",
"_____no_output_____"
],
[
"How would you find the largest change in inflammation for each patient? Does it matter if the change in inflammation is an increase or a decrease?",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
cb5af7c4ab605dba476f63e710d5dfba2ec0604a | 144,729 | ipynb | Jupyter Notebook | practica3/validation.ipynb | dcabezas98/IN | f61a751a2965f31b85249a95f970bd1dbcae01d0 | [
"MIT"
] | null | null | null | practica3/validation.ipynb | dcabezas98/IN | f61a751a2965f31b85249a95f970bd1dbcae01d0 | [
"MIT"
] | null | null | null | practica3/validation.ipynb | dcabezas98/IN | f61a751a2965f31b85249a95f970bd1dbcae01d0 | [
"MIT"
] | null | null | null | 30.151875 | 410 | 0.317151 | [
[
[
"import pandas as pd\nimport numpy as np\nfrom sklearn.model_selection import cross_val_score\nfrom collections import Counter\n\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import train_test_split\n\nfrom sklearn.model_selection import GridSearchCV\n\nfrom sklearn.ensemble import GradientBoostingClassifier\nfrom lightgbm import LGBMClassifier\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.svm import SVC\n\nfrom xgboost import XGBClassifier\n\nfrom preproc3 import na, encode, split, binarize, shuffle_in_unison, scale\nfrom imblearn.over_sampling import SMOTE",
"_____no_output_____"
],
[
"# explicitly require this experimental feature\nfrom sklearn.experimental import enable_hist_gradient_boosting # noqa\n# now you can import normally from ensemble\nfrom sklearn.ensemble import HistGradientBoostingClassifier",
"_____no_output_____"
],
[
"DATA='ugrin2020-vehiculo-usado-multiclase/'\nTRAIN=DATA+'train.csv'\nTEST=DATA+'test.csv'\n\nPREPROCESSED_DATA='preprocessed_data/'\nRESULTS='results/'",
"_____no_output_____"
],
[
"train = pd.read_csv(TRAIN) # Cargo datos de entrenamiento\ntest = pd.read_csv(TEST) # Cargo datos de test\n\n# Eliminamos el campo id ya que no se debe usar para predecir\ntest_ids = test['id']\ndel test['id']\ndel train['id']\n\n# Cambiamos el nombre a la columna Año para poder manejarla correctamente\ntrain.rename(columns = {'Año':'Anio'}, inplace = True)\ntest.rename(columns = {'Año':'Anio'}, inplace = True)",
"_____no_output_____"
],
[
"train_label = train.Precio_cat\ndel train['Precio_cat']",
"_____no_output_____"
],
[
"train2, val, train2_label, val_label = train_test_split(train, train_label, stratify=train_label, test_size=0.25, random_state=42)",
"_____no_output_____"
],
[
"train2['Precio_cat']=train2_label",
"/usr/lib/python3/dist-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"train2, val = na(train2, val)\nval['label']=val_label\nval=val[val.Combustible!='Electric']\nval=val.dropna()\n\nval_label=val.label\ndel val['label']\n\ntrain2, val = encode (train2, val)\ntrain2, train2_label, val = split(train2, val)\ntrain2, val = binarize(train2, val)\ntrain2, train2_label = SMOTE(random_state=25).fit_resample(train2, train2_label)\nshuffle_in_unison(train2, train2_label)\ntrain2, val = scale(train2, val)",
"/usr/lib/python3/dist-packages/ipykernel_launcher.py:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \n"
],
[
"#np.savez_compressed(PREPROCESSED_DATA+'binScale-val', train2, train2_label, val, val_label)",
"_____no_output_____"
],
[
"results=pd.DataFrame(columns=['iter','leaf','lr','acc'])",
"_____no_output_____"
],
[
"param_grid={'max_iter':[75,100,150,200,300], 'max_leaf_nodes':[27,29,31,33],'learning_rate':[0.08,0.1,0.12]}\nfor iters in param_grid['max_iter']:\n for leaf in param_grid['max_leaf_nodes']:\n for lr in param_grid['learning_rate']:\n print(iters, leaf, lr)\n model=HistGradientBoostingClassifier(max_iter=iters, max_leaf_nodes=leaf, learning_rate=lr)\n model.fit(train2, train2_label)\n results=results.append(pd.DataFrame([[iters, leaf, lr,accuracy_score(val_label,model.predict(val))]],columns=['iter','leaf','lr','acc']),ignore_index=True)",
"75 27 0.08\n75 27 0.1\n75 27 0.12\n75 29 0.08\n75 29 0.1\n75 29 0.12\n75 31 0.08\n75 31 0.1\n75 31 0.12\n75 33 0.08\n75 33 0.1\n75 33 0.12\n100 27 0.08\n100 27 0.1\n100 27 0.12\n100 29 0.08\n100 29 0.1\n100 29 0.12\n100 31 0.08\n100 31 0.1\n100 31 0.12\n100 33 0.08\n100 33 0.1\n100 33 0.12\n150 27 0.08\n150 27 0.1\n150 27 0.12\n150 29 0.08\n150 29 0.1\n150 29 0.12\n150 31 0.08\n150 31 0.1\n150 31 0.12\n150 33 0.08\n150 33 0.1\n150 33 0.12\n200 27 0.08\n200 27 0.1\n200 27 0.12\n200 29 0.08\n200 29 0.1\n200 29 0.12\n200 31 0.08\n200 31 0.1\n200 31 0.12\n200 33 0.08\n200 33 0.1\n200 33 0.12\n300 27 0.08\n300 27 0.1\n300 27 0.12\n300 29 0.08\n300 29 0.1\n300 29 0.12\n300 31 0.08\n300 31 0.1\n300 31 0.12\n300 33 0.08\n300 33 0.1\n300 33 0.12\n"
],
[
"with pd.option_context('display.max_rows', None, 'display.max_columns', None): \n display(results.sort_values(by='acc',ascending=False))",
"_____no_output_____"
],
[
"model=XGBClassifier(n_jobs=4)",
"_____no_output_____"
],
[
"model",
"_____no_output_____"
],
[
"results=pd.DataFrame(columns=['n','d','acc'])",
"_____no_output_____"
],
[
"param_grid={'n_estimators':[75,100,150,200,300,400,500], 'max_depth':[3,8,14,26,None]}\nfor n in param_grid['n_estimators']:\n for d in param_grid['max_depth']:\n print(n, d)\n model= XGBClassifier(n_estimators=n, max_depth=d, n_jobs=4, eval_metric='mlogloss')\n model.fit(train2, train2_label)\n results=results.append(pd.DataFrame([[n,d,accuracy_score(val_label,model.predict(val))]],columns=['n','d','acc']),ignore_index=True)",
"75 3\n"
],
[
"with pd.option_context('display.max_rows', None, 'display.max_columns', None): \n display(results.sort_values(by='acc',ascending=False))",
"_____no_output_____"
],
[
"results=pd.DataFrame(columns=['C','acc'])",
"_____no_output_____"
],
[
"param_grid={'C':[0.25,0.5,1,2.5,5,10,15,20,25,30,35,40,45,50,60,70]}\nfor c in param_grid['C']:\n print(c)\n model=SVC(C=c)\n model.fit(train2, train2_label)\n results=results.append(pd.DataFrame([[c,accuracy_score(val_label,model.predict(val))]],columns=['C','acc']),ignore_index=True)",
"0.25\n0.5\n1\n2.5\n5\n10\n15\n20\n25\n30\n35\n40\n45\n50\n60\n70\n"
],
[
"with pd.option_context('display.max_rows', None, 'display.max_columns', None): \n display(results.sort_values(by='acc',ascending=False))",
"_____no_output_____"
],
[
"results=pd.DataFrame(columns=['shape','early','alpha','acc'])",
"_____no_output_____"
],
[
"param_grid={'hidden_layer_sizes':[(50),(100),(150),(200),(250),(50,50),(100,100),(150,150),(200,200),(250,250)], 'early_stopping':[True,False],'alpha':[0.00005,0.0001,0.00015]}\nfor s in param_grid['hidden_layer_sizes']:\n for early in param_grid['early_stopping']:\n for a in param_grid['alpha']:\n print(s,early,a)\n model=MLPClassifier(hidden_layer_sizes=s,alpha=a,early_stopping=early,max_iter=1000)\n model.fit(train2, train2_label)\n results=results.append(pd.DataFrame([[s,early,a,accuracy_score(val_label,model.predict(val))]],columns=['shape','early','alpha','acc']),ignore_index=True)",
"50 True 5e-05\n50 True 0.0001\n50 True 0.00015\n50 False 5e-05\n50 False 0.0001\n50 False 0.00015\n100 True 5e-05\n100 True 0.0001\n100 True 0.00015\n100 False 5e-05\n100 False 0.0001\n100 False 0.00015\n150 True 5e-05\n150 True 0.0001\n150 True 0.00015\n150 False 5e-05\n150 False 0.0001\n150 False 0.00015\n200 True 5e-05\n200 True 0.0001\n200 True 0.00015\n200 False 5e-05\n200 False 0.0001\n200 False 0.00015\n250 True 5e-05\n250 True 0.0001\n250 True 0.00015\n250 False 5e-05\n250 False 0.0001\n250 False 0.00015\n(50, 50) True 5e-05\n(50, 50) True 0.0001\n(50, 50) True 0.00015\n(50, 50) False 5e-05\n(50, 50) False 0.0001\n(50, 50) False 0.00015\n(100, 100) True 5e-05\n(100, 100) True 0.0001\n(100, 100) True 0.00015\n(100, 100) False 5e-05\n(100, 100) False 0.0001\n(100, 100) False 0.00015\n(150, 150) True 5e-05\n(150, 150) True 0.0001\n(150, 150) True 0.00015\n(150, 150) False 5e-05\n(150, 150) False 0.0001\n(150, 150) False 0.00015\n(200, 200) True 5e-05\n(200, 200) True 0.0001\n(200, 200) True 0.00015\n(200, 200) False 5e-05\n(200, 200) False 0.0001\n(200, 200) False 0.00015\n(250, 250) True 5e-05\n(250, 250) True 0.0001\n(250, 250) True 0.00015\n(250, 250) False 5e-05\n(250, 250) False 0.0001\n(250, 250) False 0.00015\n"
],
[
"with pd.option_context('display.max_rows', None, 'display.max_columns', None): \n display(results.sort_values(by='acc',ascending=False))",
"_____no_output_____"
],
[
"results=pd.DataFrame(columns=['n','lr','s','d','acc'])",
"_____no_output_____"
],
[
"param_grid={'n_estimators':[450,500,550,600], 'learning_rate':[0.1,0.125,0.15,0.175,0.2], 'subsample':[0.8,0.9], 'max_depth':[2,3,4]}\nfor n in param_grid['n_estimators'][3:4]:\n for lr in param_grid['learning_rate']:\n for s in param_grid['subsample']:\n for d in param_grid['max_depth']:\n print(n, lr, s, d)\n model= GradientBoostingClassifier(n_estimators=n, learning_rate=lr, subsample=s, max_depth=d)\n model.fit(train2, train2_label)\n results=results.append(pd.DataFrame([[n,lr,s,d,accuracy_score(val_label,model.predict(val))]],columns=['n','lr','s','d','acc']),ignore_index=True)",
"600 0.1 0.8 2\n600 0.1 0.8 3\n600 0.1 0.8 4\n600 0.1 0.9 2\n600 0.1 0.9 3\n600 0.1 0.9 4\n600 0.125 0.8 2\n600 0.125 0.8 3\n600 0.125 0.8 4\n600 0.125 0.9 2\n600 0.125 0.9 3\n600 0.125 0.9 4\n600 0.15 0.8 2\n600 0.15 0.8 3\n600 0.15 0.8 4\n600 0.15 0.9 2\n600 0.15 0.9 3\n600 0.15 0.9 4\n600 0.175 0.8 2\n600 0.175 0.8 3\n600 0.175 0.8 4\n600 0.175 0.9 2\n600 0.175 0.9 3\n600 0.175 0.9 4\n600 0.2 0.8 2\n600 0.2 0.8 3\n600 0.2 0.8 4\n600 0.2 0.9 2\n600 0.2 0.9 3\n600 0.2 0.9 4\n"
],
[
"with pd.option_context('display.max_rows', None, 'display.max_columns', None): \n display(results.sort_values(by='acc',ascending=False))",
"_____no_output_____"
],
[
"results=pd.DataFrame(columns=['n','lr','leaves','d','acc'])",
"_____no_output_____"
],
[
"param_grid={'learning_rate':[0.07,0.08,0.1,0.12],'n_estimators':[125,150,200],'num_leaves':[25,27,29,31], 'max_depth':[3,8,-1]}\nfor n in param_grid['n_estimators']:\n for lr in param_grid['learning_rate']:\n for leaves in param_grid['num_leaves']:\n for d in param_grid['max_depth']:\n print(n, lr, leaves, d)\n model = LGBMClassifier(n_estimators=n, learning_rate=lr, num_leaves=leaves, max_depth=d)\n model.fit(train2, train2_label)\n results=results.append(pd.DataFrame([[n,lr,leaves,d,accuracy_score(val_label,model.predict(val))]],columns=['n','lr','leaves','d','acc']),ignore_index=True)",
"125 0.07 25 3\n125 0.07 25 8\n125 0.07 25 -1\n125 0.07 27 3\n125 0.07 27 8\n125 0.07 27 -1\n125 0.07 29 3\n125 0.07 29 8\n125 0.07 29 -1\n125 0.07 31 3\n125 0.07 31 8\n[LightGBM] [Warning] Accuracy may be bad since you didn't explicitly set num_leaves OR 2^max_depth > num_leaves. (num_leaves=31).\n125 0.07 31 -1\n125 0.08 25 3\n125 0.08 25 8\n125 0.08 25 -1\n125 0.08 27 3\n125 0.08 27 8\n125 0.08 27 -1\n125 0.08 29 3\n125 0.08 29 8\n125 0.08 29 -1\n125 0.08 31 3\n125 0.08 31 8\n[LightGBM] [Warning] Accuracy may be bad since you didn't explicitly set num_leaves OR 2^max_depth > num_leaves. (num_leaves=31).\n125 0.08 31 -1\n125 0.1 25 3\n125 0.1 25 8\n125 0.1 25 -1\n125 0.1 27 3\n125 0.1 27 8\n125 0.1 27 -1\n125 0.1 29 3\n125 0.1 29 8\n125 0.1 29 -1\n125 0.1 31 3\n125 0.1 31 8\n[LightGBM] [Warning] Accuracy may be bad since you didn't explicitly set num_leaves OR 2^max_depth > num_leaves. (num_leaves=31).\n125 0.1 31 -1\n125 0.12 25 3\n125 0.12 25 8\n125 0.12 25 -1\n125 0.12 27 3\n125 0.12 27 8\n125 0.12 27 -1\n125 0.12 29 3\n125 0.12 29 8\n125 0.12 29 -1\n125 0.12 31 3\n125 0.12 31 8\n[LightGBM] [Warning] Accuracy may be bad since you didn't explicitly set num_leaves OR 2^max_depth > num_leaves. (num_leaves=31).\n125 0.12 31 -1\n150 0.07 25 3\n150 0.07 25 8\n150 0.07 25 -1\n150 0.07 27 3\n150 0.07 27 8\n150 0.07 27 -1\n150 0.07 29 3\n150 0.07 29 8\n150 0.07 29 -1\n150 0.07 31 3\n150 0.07 31 8\n[LightGBM] [Warning] Accuracy may be bad since you didn't explicitly set num_leaves OR 2^max_depth > num_leaves. (num_leaves=31).\n150 0.07 31 -1\n150 0.08 25 3\n150 0.08 25 8\n150 0.08 25 -1\n150 0.08 27 3\n150 0.08 27 8\n150 0.08 27 -1\n150 0.08 29 3\n150 0.08 29 8\n150 0.08 29 -1\n150 0.08 31 3\n150 0.08 31 8\n[LightGBM] [Warning] Accuracy may be bad since you didn't explicitly set num_leaves OR 2^max_depth > num_leaves. (num_leaves=31).\n150 0.08 31 -1\n150 0.1 25 3\n150 0.1 25 8\n150 0.1 25 -1\n150 0.1 27 3\n150 0.1 27 8\n150 0.1 27 -1\n150 0.1 29 3\n150 0.1 29 8\n150 0.1 29 -1\n150 0.1 31 3\n150 0.1 31 8\n[LightGBM] [Warning] Accuracy may be bad since you didn't explicitly set num_leaves OR 2^max_depth > num_leaves. (num_leaves=31).\n150 0.1 31 -1\n150 0.12 25 3\n150 0.12 25 8\n150 0.12 25 -1\n150 0.12 27 3\n150 0.12 27 8\n150 0.12 27 -1\n150 0.12 29 3\n150 0.12 29 8\n150 0.12 29 -1\n150 0.12 31 3\n150 0.12 31 8\n[LightGBM] [Warning] Accuracy may be bad since you didn't explicitly set num_leaves OR 2^max_depth > num_leaves. (num_leaves=31).\n150 0.12 31 -1\n200 0.07 25 3\n200 0.07 25 8\n200 0.07 25 -1\n200 0.07 27 3\n200 0.07 27 8\n200 0.07 27 -1\n200 0.07 29 3\n200 0.07 29 8\n200 0.07 29 -1\n200 0.07 31 3\n200 0.07 31 8\n[LightGBM] [Warning] Accuracy may be bad since you didn't explicitly set num_leaves OR 2^max_depth > num_leaves. (num_leaves=31).\n200 0.07 31 -1\n200 0.08 25 3\n200 0.08 25 8\n200 0.08 25 -1\n200 0.08 27 3\n200 0.08 27 8\n200 0.08 27 -1\n200 0.08 29 3\n200 0.08 29 8\n200 0.08 29 -1\n200 0.08 31 3\n200 0.08 31 8\n[LightGBM] [Warning] Accuracy may be bad since you didn't explicitly set num_leaves OR 2^max_depth > num_leaves. (num_leaves=31).\n200 0.08 31 -1\n200 0.1 25 3\n200 0.1 25 8\n200 0.1 25 -1\n200 0.1 27 3\n200 0.1 27 8\n200 0.1 27 -1\n200 0.1 29 3\n200 0.1 29 8\n200 0.1 29 -1\n200 0.1 31 3\n200 0.1 31 8\n[LightGBM] [Warning] Accuracy may be bad since you didn't explicitly set num_leaves OR 2^max_depth > num_leaves. 
(num_leaves=31).\n200 0.1 31 -1\n200 0.12 25 3\n200 0.12 25 8\n200 0.12 25 -1\n200 0.12 27 3\n200 0.12 27 8\n200 0.12 27 -1\n200 0.12 29 3\n200 0.12 29 8\n200 0.12 29 -1\n200 0.12 31 3\n200 0.12 31 8\n[LightGBM] [Warning] Accuracy may be bad since you didn't explicitly set num_leaves OR 2^max_depth > num_leaves. (num_leaves=31).\n200 0.12 31 -1\n"
],
[
"with pd.option_context('display.max_rows', None, 'display.max_columns', None): \n display(results.sort_values(by='acc',ascending=False))",
"_____no_output_____"
],
[
"model.fit(train2,train2_label)\npred=model.predict(val)\naccuracy_score(val_label,pred)",
"_____no_output_____"
],
[
"scores=cross_val_score(model, train, label, cv=5)\nprint(scores)\nprint(np.mean(scores))",
"[0.90575342 0.91506849 0.91123288 0.92767123 0.91178082]\n0.9143013698630137\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5aff0f593c5b1992e4aa577b410eb631e19bbb | 5,865 | ipynb | Jupyter Notebook | final/cmd_control.ipynb | BobAnkh/THUEE_ROBOTS | 2a302c847058a8d80d83b70b1670e1ffb6de8c57 | [
"Apache-2.0"
] | 1 | 2021-05-29T04:36:39.000Z | 2021-05-29T04:36:39.000Z | final/cmd_control.ipynb | BobAnkh/THUEE_ROBOTS | 2a302c847058a8d80d83b70b1670e1ffb6de8c57 | [
"Apache-2.0"
] | null | null | null | final/cmd_control.ipynb | BobAnkh/THUEE_ROBOTS | 2a302c847058a8d80d83b70b1670e1ffb6de8c57 | [
"Apache-2.0"
] | null | null | null | 26.418919 | 519 | 0.485251 | [
[
[
"import binascii\nimport serial\nimport os",
"_____no_output_____"
],
[
"os.system('sh ./stop_sys_ttyPS0.sh')",
"_____no_output_____"
],
[
"def run_action(cmd):\n ser = serial.Serial(\"/dev/ttyPS0\", 9600, timeout=5)\n cnt_err = 0\n while 1:\n test_read = ser.read()\n print('test_read', test_read)\n cnt_err += 1\n if test_read== b'\\xa3' or cnt_err == 50:\n break\n \n if cnt_err == 50:\n print('can not get REQ')\n else:\n print('read REQ finished!')\n ser.write(cmd2data(cmd))\n print('send action ok!')\n ser.close()",
"_____no_output_____"
],
[
"def crc_calculate(package):\n crc = 0\n for hex_data in package:\n\n b2 = hex_data.to_bytes(1, byteorder='little')\n crc = binascii.crc_hqx(b2, crc)\n\n return [(crc >> 8), (crc & 255)] # 校验位两位",
"_____no_output_____"
],
[
"def cmd2data(cmd):\n cnt=0\n cmd_list=[]\n for i in cmd:\n cnt+=1\n cmd_list+=[ord(i)]\n cmd_list=[0xff,0xff]+[(cnt+5)>>8,(cnt+5)&255]+[0x01,(cnt+1)&255,0x03]+cmd_list\n cmd_list=cmd_list+crc_calculate(cmd_list)\n return cmd_list",
"_____no_output_____"
],
[
"def wait_req():\n ser = serial.Serial(\"/dev/ttyPS0\", 9600, timeout=5)\n while 1:\n test_read=ser.read()\n if test_read== b'\\xa3' :\n print('read REQ finished!') \n break\n ",
"_____no_output_____"
],
[
"run_action('XiaDun')\nwait_req()\nrun_action('Stand')",
"test_read b'\\xff'\ntest_read b'\\xff'\ntest_read b'\\x00'\ntest_read b'\\x0c'\ntest_read b'\\x04'\ntest_read b'R'\ntest_read b'E'\ntest_read b'Q'\ntest_read b'\\x00'\ntest_read b'\\x01'\ntest_read b'\\x03'\ntest_read b'\\x02'\ntest_read b'\\x00'\ntest_read b'\\x00'\ntest_read b'\\x8e'\ntest_read b'\\xa3'\nread REQ finished!\nsend action ok!\nread REQ finished!\ntest_read b'\\xff'\ntest_read b'\\xff'\ntest_read b'\\x00'\ntest_read b'\\x0e'\ntest_read b'\\x06'\ntest_read b'E'\ntest_read b'R'\ntest_read b'R'\ntest_read b'O'\ntest_read b'R'\ntest_read b'\\x00'\ntest_read b'\\x01'\ntest_read b'\\x03'\ntest_read b'\\x02'\ntest_read b'\\x00'\ntest_read b'\\x00'\ntest_read b' '\ntest_read b'\\x13'\ntest_read b'\\xff'\ntest_read b'\\xff'\ntest_read b'\\x00'\ntest_read b'\\x0c'\ntest_read b'\\x04'\ntest_read b'R'\ntest_read b'E'\ntest_read b'Q'\ntest_read b'\\x00'\ntest_read b'\\x01'\ntest_read b'\\x03'\ntest_read b'\\x02'\ntest_read b'\\x00'\ntest_read b'\\x00'\ntest_read b'\\x8e'\ntest_read b'\\xa3'\nread REQ finished!\nsend action ok!\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5b0fb67674d5af364ce699779dbfd8a800bd89 | 11,380 | ipynb | Jupyter Notebook | sagemaker_neo_compilation_jobs/tensorflow_distributed_mnist/tensorflow_distributed_mnist_neo.ipynb | sureindia-in/sagemaker-examples-good | be72218d4fd75bdb7f0df136026366fcd726fbe0 | [
"Apache-2.0"
] | 1 | 2021-06-21T12:48:16.000Z | 2021-06-21T12:48:16.000Z | sagemaker_neo_compilation_jobs/tensorflow_distributed_mnist/tensorflow_distributed_mnist_neo.ipynb | sureindia-in/sagemaker-examples-good | be72218d4fd75bdb7f0df136026366fcd726fbe0 | [
"Apache-2.0"
] | 1 | 2019-07-01T23:54:20.000Z | 2019-07-01T23:55:29.000Z | sagemaker_neo_compilation_jobs/tensorflow_distributed_mnist/tensorflow_distributed_mnist_neo.ipynb | sureindia-in/sagemaker-examples-good | be72218d4fd75bdb7f0df136026366fcd726fbe0 | [
"Apache-2.0"
] | 2 | 2021-06-24T11:49:58.000Z | 2021-06-24T11:54:01.000Z | 32.890173 | 817 | 0.61775 | [
[
[
"# TensorFlow BYOM: Train with Custom Training Script, Compile with Neo, and Deploy on SageMaker\n\nIn this notebook you will compile a trained model using Amazon SageMaker Neo. This notebook is similar to the [TensorFlow MNIST training and serving notebook](https://github.com/aws/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/tensorflow_script_mode_training_and_serving/tensorflow_script_mode_training_and_serving.ipynb) in terms of its functionality. You will complete the same classification task, however this time you will compile the trained model using the SageMaker Neo API on the backend. SageMaker Neo will optimize your model to run on your choice of hardware. At the end of this notebook you will setup a real-time hosting endpoint in SageMaker for your SageMaker Neo compiled model using the TensorFlow Model Server. Note: This notebooks requires Sagemaker Python SDK v2.x.x or above.",
"_____no_output_____"
],
[
"### Set up the environment",
"_____no_output_____"
]
],
[
[
"import os\nimport sagemaker\nfrom sagemaker import get_execution_role\n\nsagemaker_session = sagemaker.Session()\n\nrole = get_execution_role()",
"_____no_output_____"
]
],
[
[
"### Download the MNIST dataset",
"_____no_output_____"
]
],
[
[
"import utils\nfrom tensorflow.contrib.learn.python.learn.datasets import mnist\nimport tensorflow as tf\n\ndata_sets = mnist.read_data_sets(\"data\", dtype=tf.uint8, reshape=False, validation_size=5000)\n\nutils.convert_to(data_sets.train, \"train\", \"data\")\nutils.convert_to(data_sets.validation, \"validation\", \"data\")\nutils.convert_to(data_sets.test, \"test\", \"data\")",
"_____no_output_____"
]
],
[
[
"### Upload the data\nWe use the ```sagemaker.Session.upload_data``` function to upload our datasets to an S3 location. The return value inputs identifies the location -- we will use this later when we start the training job.",
"_____no_output_____"
]
],
[
[
"inputs = sagemaker_session.upload_data(path=\"data\", key_prefix=\"data/DEMO-mnist\")",
"_____no_output_____"
]
],
[
[
"# Construct a script for distributed training \nHere is the full code for the network model:",
"_____no_output_____"
]
],
[
[
"!cat 'mnist.py'",
"_____no_output_____"
]
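,
[
"# The output of the `!cat 'mnist.py'` cell above was not captured when this notebook was saved.\n# The function below is only a minimal, illustrative sketch of the kind of `model_fn` such a\n# script provides -- it is NOT the actual mnist.py. The feature key 'x' and the single dense\n# layer are assumptions made purely for illustration.\nimport tensorflow as tf\n\n\ndef example_model_fn(features, labels, mode):\n    # map a flattened image batch to 10 class logits\n    logits = tf.layers.dense(tf.layers.flatten(features['x']), 10)\n    predictions = {'classes': tf.argmax(logits, axis=1)}\n    if mode == tf.estimator.ModeKeys.PREDICT:\n        return tf.estimator.EstimatorSpec(mode, predictions=predictions)\n    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)\n    train_op = tf.train.AdamOptimizer().minimize(loss, global_step=tf.train.get_global_step())\n    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op, predictions=predictions)",
"_____no_output_____"
]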
],
[
[
"The script here is and adaptation of the [TensorFlow MNIST example](https://github.com/tensorflow/models/blob/master/official/vision/image_classification/mnist_main.py). It provides a ```model_fn(features, labels, mode)```, which is used for training, evaluation and inference. See [TensorFlow MNIST training and serving notebook](https://github.com/aws/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/tensorflow_script_mode_training_and_serving/tensorflow_script_mode_training_and_serving.ipynb) for more details about the training script.",
"_____no_output_____"
],
[
"## Create a training job using the sagemaker.TensorFlow estimator",
"_____no_output_____"
]
],
[
[
"from sagemaker.tensorflow import TensorFlow\n\nmnist_estimator = TensorFlow(\n entry_point=\"mnist.py\",\n role=role,\n framework_version=\"1.15.3\",\n py_version=\"py3\",\n training_steps=1000,\n evaluation_steps=100,\n instance_count=2,\n instance_type=\"ml.c4.xlarge\",\n)\n\nmnist_estimator.fit(inputs)",
"_____no_output_____"
]
],
[
[
"The **```fit```** method will create a training job in two **ml.c4.xlarge** instances. The logs above will show the instances doing training, evaluation, and incrementing the number of **training steps**. \n\nIn the end of the training, the training job will generate a saved model for TF serving.",
"_____no_output_____"
],
[
"# Deploy the trained model to prepare for predictions (the old way)\n\nThe deploy() method creates an endpoint which serves prediction requests in real-time.",
"_____no_output_____"
]
],
[
[
"mnist_predictor = mnist_estimator.deploy(initial_instance_count=1, instance_type=\"ml.m4.xlarge\")",
"_____no_output_____"
]
],
[
[
"## Invoking the endpoint",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport json\nfrom tensorflow.examples.tutorials.mnist import input_data\n\nmnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)\n\nfor i in range(10):\n data = mnist.test.images[i].tolist()\n # Follow https://www.tensorflow.org/tfx/serving/api_rest guide to format input to the model server\n predict_response = mnist_predictor.predict({\"instances\": np.asarray(data).tolist()})\n\n print(\"========================================\")\n label = np.argmax(mnist.test.labels[i])\n print(\"label is {}\".format(label))\n prediction = np.argmax(predict_response[\"predictions\"])\n print(\"prediction is {}\".format(prediction))",
"_____no_output_____"
]
],
[
[
"## Deleting the endpoint",
"_____no_output_____"
]
],
[
[
"sagemaker.Session().delete_endpoint(mnist_predictor.endpoint)",
"_____no_output_____"
]
],
[
[
"# Deploy the trained model using Neo\n\nNow the model is ready to be compiled by Neo to be optimized for our hardware of choice. We are using the ``TensorFlowEstimator.compile_model`` method to do this. For this example, our target hardware is ``'ml_c5'``. You can changed these to other supported target hardware if you prefer.\n\n## Compiling the model\nThe ``input_shape`` is the definition for the model's input tensor and ``output_path`` is where the compiled model will be stored in S3. **Important. If the following command result in a permission error, scroll up and locate the value of execution role returned by `get_execution_role()`. The role must have access to the S3 bucket specified in ``output_path``.**",
"_____no_output_____"
]
],
[
[
"output_path = \"/\".join(mnist_estimator.output_path.split(\"/\")[:-1])\noptimized_estimator = mnist_estimator.compile_model(\n target_instance_family=\"ml_c5\",\n input_shape={\"data\": [1, 784]}, # Batch size 1, 3 channels, 224x224 Images.\n output_path=output_path,\n framework=\"tensorflow\",\n framework_version=\"1.15.3\",\n)",
"_____no_output_____"
]
],
[
[
"## Set image uri (Temporarily required)\nImage URI: aws_account_id.dkr.ecr.aws_region.amazonaws.com/sagemaker-inference-tensorflow:1.15.3-instance_type-py3\n\nRefer to the table on the bottom [here](https://docs.aws.amazon.com/sagemaker/latest/dg/neo-deployment-hosting-services-container-images.html) to get aws account id and region mapping",
"_____no_output_____"
]
],
[
[
"optimized_estimator.image_uri = (\n \"301217895009.dkr.ecr.us-west-2.amazonaws.com/sagemaker-inference-tensorflow:1.15.3-cpu-py3\"\n)",
"_____no_output_____"
]
],
[
[
"## Deploying the compiled model",
"_____no_output_____"
]
],
[
[
"optimized_predictor = optimized_estimator.deploy(\n initial_instance_count=1, instance_type=\"ml.c5.xlarge\"\n)",
"_____no_output_____"
]
],
[
[
"## Invoking the endpoint",
"_____no_output_____"
]
],
[
[
"from tensorflow.examples.tutorials.mnist import input_data\n\nmnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)\n\nfor i in range(10):\n data = mnist.test.images[i].tolist()\n # Follow https://www.tensorflow.org/tfx/serving/api_rest guide to format input to the model server\n predict_response = optimized_predictor.predict({\"instances\": np.asarray(data).tolist()})\n\n print(\"========================================\")\n label = np.argmax(mnist.test.labels[i])\n print(\"label is {}\".format(label))\n prediction = np.argmax(predict_response[\"predictions\"])\n print(\"prediction is {}\".format(prediction))",
"_____no_output_____"
]
],
[
[
"## Deleting endpoint",
"_____no_output_____"
]
],
[
[
"sagemaker.Session().delete_endpoint(optimized_predictor.endpoint)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb5b134b452daf7e04de619adb7d3346673fac4d | 3,123 | ipynb | Jupyter Notebook | EmbeddingLayer.ipynb | SolbiChoi/test_deeplearning | c1fb02bc0e14235c4696b4386c469603cc0c682c | [
"Apache-2.0"
] | null | null | null | EmbeddingLayer.ipynb | SolbiChoi/test_deeplearning | c1fb02bc0e14235c4696b4386c469603cc0c682c | [
"Apache-2.0"
] | null | null | null | EmbeddingLayer.ipynb | SolbiChoi/test_deeplearning | c1fb02bc0e14235c4696b4386c469603cc0c682c | [
"Apache-2.0"
] | null | null | null | 23.839695 | 240 | 0.430355 | [
[
[
"<a href=\"https://colab.research.google.com/github/SolbiChoi/test_deeplearning/blob/master/EmbeddingLayer.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf",
"_____no_output_____"
],
[
"import numpy as np\nx_data = np.array([[4],[20]])\nx_data, x_data.shape",
"_____no_output_____"
],
[
"model = tf.keras.Sequential()\n\nmodel.add(tf.keras.layers.Embedding(50,2,input_length=1)) # input layer , 앞에 두개 차원에 대한 얘기\n# model.add() # hidden layer\n# model.add() # output layer\n\nmodel.compile(optimizer='adam', loss='mse')",
"_____no_output_____"
],
[
"model.predict(x_data)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
cb5b20decaae39b669a38fa6aa3ef425de3ab50e | 1,796 | ipynb | Jupyter Notebook | Python week 10/Jupyter version/Class_Project_1.ipynb | clementshade/CSC101 | 997cad11df66d5b56f2a76cd431a406ae75c1b7a | [
"MIT"
] | null | null | null | Python week 10/Jupyter version/Class_Project_1.ipynb | clementshade/CSC101 | 997cad11df66d5b56f2a76cd431a406ae75c1b7a | [
"MIT"
] | null | null | null | Python week 10/Jupyter version/Class_Project_1.ipynb | clementshade/CSC101 | 997cad11df66d5b56f2a76cd431a406ae75c1b7a | [
"MIT"
] | null | null | null | 22.45 | 54 | 0.489421 | [
[
[
"#Input name and ages\n\n#Person one data\nname1 = str(input('Name of first person: '))\nprint(\"Hello, how old are u\",name1)\nage1 = input(\"...\")\nprint(name1,\"is\", age1,\"years old\")\n\n#person two data\nname2 = str(input('Name of second person: '))\nprint(\"Hello, how old are u\",name2)\nage2 = input(\"...\")\nprint(name2,\"is\", age2,\"years old\")\n\ninput(\"Now im going to swap ages\")\n#switching variables\nprint(name1,\"is\", age2,\"years old\")\nprint(name2,\"is\", age1,\"years old\")\n\ninput(\"press any key to exit...\")\n\n",
"Name of first person: f\nHello, how old are u f\n...1\nf is 1 years old\nName of second person: g\nHello, how old are u g\n...2\ng is 2 years old\nNow im going to swap ages\nf is 2 years old\ng is 1 years old\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
cb5b25a417ec61281720a6562c2675574be21fda | 6,904 | ipynb | Jupyter Notebook | 11_introduccion_a_d3.ipynb | Cloudevel/cd221 | c8ad31a9dde7803ae8679bec7ef09b9e0ca4461f | [
"MIT"
] | null | null | null | 11_introduccion_a_d3.ipynb | Cloudevel/cd221 | c8ad31a9dde7803ae8679bec7ef09b9e0ca4461f | [
"MIT"
] | null | null | null | 11_introduccion_a_d3.ipynb | Cloudevel/cd221 | c8ad31a9dde7803ae8679bec7ef09b9e0ca4461f | [
"MIT"
] | 1 | 2020-09-06T06:07:05.000Z | 2020-09-06T06:07:05.000Z | 30.821429 | 405 | 0.540556 | [
[
[
"[](https://pythonista.io)",
"_____no_output_____"
],
[
"[*D3.js*](https://d3js.org/) es una biblioteca de Javascript especializada en la creación de documentos orientados a datos (Data Driven Documents) capaz de acceder a los recursos de un documento HTML mediante selecciones.\n\n*D3.js* no contiene herramientas específicas para crear gráficos, pero es capaz de acceder a los estilos de un elemento, así como de crear y modificar elementos SVG y Canvas.",
"_____no_output_____"
],
[
"## Inclusión de *D3.js* en un documento HTML.\n\nExisten varias formas de acceder a la biblioteca, dependiendo del estilo de programación.\n\nEs posible acceder a la documentación de *D3.js* en la siguiente liga:\n\nhttps://github.com/d3/d3\n\n**Nota:** Al momento de escribir este documento, la versión 5 es la más reciente de *D3.js*. ",
"_____no_output_____"
],
[
"### Inclusión de mediante el elemento *<script>*.\n\nLa forma más común de incluir la biblioteca es haciendo referencia a una URL desde la que se puede obtener la bibilioteca.\n\n``` html\n<script src=\"https://d3js.org/d3.v5.js\"></script>\n```\nTambién es posible acceder a la versión minimizada de la biblioteca.\n\n```html\n<script src=\"https://d3js.org/d3.v5.min.js\"></script>\n```\n\nDel mismo modo, es posible descargar el archivo y cargarlo de forma local.\n",
"_____no_output_____"
],
[
"**Ejemplo:**\n\nEl documento [html/d3-1.html](html/d3-1.html) contiene el siguiente código:\n\n```html\n<!DOCTYPE html>\n<html>\n <head>\n <meta charset=\"utf-8\">\n <title>Ejemplo 1</title>\n <script src=\"https://d3js.org/d3.v5.min.js\"></script>\n </head>\n <body>\n <h1>Ejemplo básico de <em>D3.js</em></h1>\n <p>El siguiente párrafo será modificado mediante el uso de la biblioteca <i>D3.js</i></p>\n <p id=\"muestra\">¡Hola, mundo!</p>\n <script>\n d3.select(\"#muestra\").\n style(\"background-color\", \"gray\").\n style(\"color\", \"white\").\n style(\"font-size\", \"150%\").\n style(\"text-align\", \"center\");\n </script>\n </body>\n</html>\n```",
"_____no_output_____"
],
[
"### Inclusión como módulo.\n\nLa biblioteca es compatible con el formato de módulos de ECMAScript 2015.",
"_____no_output_____"
],
[
"## *D3.js* en una notebook de Jupyter con un kernel de iPython.\n\nEn versiones recientes de Jupyter, no es posible utilizar el elemento *<script>* dentro de una celda usando el comando mágico *%%html%%*.",
"_____no_output_____"
],
[
"### Uso de *RequireJS*.\n\n[*RequireJS*](https://requirejs.org/) es una herramienta que permite importar diversos paquetes de Javascript como si fueran módulos que ya se encuentra incluida en las notebooks de Jupyter y puede ser invicada como *require*.\n\n**Nota:** En este capítulo se explorarán sólo las funcionalidades necesarias de *RequireJS* para acceder a *D3.js*.",
"_____no_output_____"
]
],
[
[
"%%javascript\n\nrequire.config({\n paths: {\n d3: 'https://d3js.org/d3.v5.min'\n }\n })",
"_____no_output_____"
]
],
[
[
"**Ejemplo:**",
"_____no_output_____"
]
],
[
[
"%%html\n<p id=\"muestra\">Hola, mundo.</p>",
"_____no_output_____"
],
[
"%%javascript\nrequire.config({\n paths: {\n d3: 'https://d3js.org/d3.v5.min'\n }\n })\n\nrequire([\"d3\"], function(d3){\n d3.select(\"#muestra\").\n style(\"background-color\", \"gray\").\n style(\"color\", \"white\").\n style(\"font-size\", \"150%\").\n style(\"text-align\", \"center\");\n})",
"_____no_output_____"
]
],
[
[
"## El objeto *d3*.\n\nEl objeto *d3* es el componente básico mediante el cual se hace uso de las funcionalidades de la biblioteca sobre los elementos de un documento HTML.\n\nEl objeto *d3* cuenta con múltiples métodos que regresan a su vez objetos *d3*, alo cuales se les pueden ir añadiendo métodos.\n\n```\nd3.<método 1>(<argumentos 1>).<método 2>(<argumentos 2>)...<método n>(<argumentos n>);\n```\n\nDebido a que javascript no es sensible a los retornos de línea y para facilitar la lectura, se sugiere utilizar una estructura como la siguiente:\n\n```\nd3.<método 1>(<argumentos 1>).\n <método 2>(<argumentos 2>).\n ...\n <método n>(<argumentos n>);\n```\n",
"_____no_output_____"
],
[
"<p style=\"text-align: center\"><a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/\"><img alt=\"Licencia Creative Commons\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by/4.0/80x15.png\" /></a><br />Esta obra está bajo una <a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/\">Licencia Creative Commons Atribución 4.0 Internacional</a>.</p>\n<p style=\"text-align: center\">© José Luis Chiquete Valdivieso. 2019.</p>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
cb5b2d95f5c1bf73bf085f8157656d037ff10995 | 14,150 | ipynb | Jupyter Notebook | Rafael's DS Class 4-1.ipynb | Johnmhv19/Starting-W-Python | 45a8d415b680b1f3539619b56a2db823376eb30d | [
"MIT"
] | null | null | null | Rafael's DS Class 4-1.ipynb | Johnmhv19/Starting-W-Python | 45a8d415b680b1f3539619b56a2db823376eb30d | [
"MIT"
] | null | null | null | Rafael's DS Class 4-1.ipynb | Johnmhv19/Starting-W-Python | 45a8d415b680b1f3539619b56a2db823376eb30d | [
"MIT"
] | null | null | null | 34.596577 | 447 | 0.55689 | [
[
[
"# Data Science Session 4",
"_____no_output_____"
],
[
"John Michael Hernandez Valerio is inviting you to a scheduled Zoom meeting.\n\nTopic: Rafael's Data Science Class 4\nTime: Mar 29, 2021 08:00 AM Beijing, Shanghai\n\nJoin Zoom Meeting\nhttps://us04web.zoom.us/j/75939938727?pwd=dVJhTXNydTV2TGxJUVZ1QVZaUnByUT09\n\nMeeting ID: 759 3993 8727\nPasscode: KNa2R4",
"_____no_output_____"
],
[
"### Today's Class\n- Concate strings: using + vs , (comma separate string) \n- Story telling with Strings and `input()`\n\n### Next Class\n- Intro to Dataframe\n- Intro to Pandas Library\n- Search for an string in a string.\n- What is a CSV file and how to read it.\n- What is a xsl or xsls file and how to read it.\n\n\n\n**Important:** Its neccesary to mention that someone may think there are some information missing, in the class video and in this document, and the reason for that is that the course have been structure for the specific purpose of the student's need ([Rafael Mesa](https://www.linkedin.com/in/rafael-mesa-rodriguez-2a1298124/ \"Rafael's LinkedIn profile\")). ",
"_____no_output_____"
],
[
"<font size=\"4\" color=\"blue\" face=\"verdana\"> <B>Concate strings: using + vs , (comma separate string) </B></font> \n\nPython provides several methods of formatting strings in the **`print()`** function beyond **string addition** \n \n**`print()`** provides using **commas** to combine stings for output \nby comma separating strings **`print()`** will output each separated by a space by default",
"_____no_output_____"
],
[
"<font size=\"4\" color=\"#B24C00\" face=\"verdana\"> <B>Example</B></font> \n**print 3 strings on the same line using commas inside the print() function**",
"_____no_output_____"
]
],
[
[
"# [ ] print 3 strings on the same line using commas inside the print() function \n\nprint(\"Rafael\")\nprint(\"John\")\nprint(\"Daniel\")\n\nprint(\"Rafael\" + \"John\" + \"Daniel\")\nprint(\"Rafael \" + \"John \" + \"Daniel\")\nprint(\"Rafael\" +\" \"+ \"John\" +\" \"+ \"Daniel\")",
"Rafael\nJohn\nDaniel\nRafaelJohnDaniel\nRafael John Daniel\nRafael John Daniel\n"
],
[
"print(\"Rafael \" + \"John \" + \"Daniel\")\nprint(\"Rafael\" +\" \"+ \"John\" +\" \"+ \"Daniel\")\n\nprint(\"Rafael\" , \"John\" , \"Daniel\")",
"Rafael John Daniel\nRafael John Daniel\nRafael John Daniel\n"
]
],
[
[
"<font size=\"4\" color=\"#B24C00\" face=\"verdana\"> <B>Example: Concatenating multiple elements using comma</B></font> ",
"_____no_output_____"
]
],
[
[
"# review and run code\ntime_PU= input(\"What time do you want to go to the party\")\nlocation= \"Puente de la 17\"\nprint(\"I will pick you up @\",time_PU,\"for the party.\", \"wait for me at\",location)",
"_____no_output_____"
]
],
[
[
"<font size=\"4\" color=\"#B24C00\" face=\"verdana\"> <B>Task 1</B></font> \n\nCreate a new markdown cell below and explain what is the difference between using the addition sign (+) vs using comma (,) to concatenate elements in a print function",
"_____no_output_____"
],
[
"<font size=\"4\" color=\"#B24C00\" face=\"verdana\"> <B>Task 2</B></font> \n\nCreate a new code cell below and provide an example of the explaination given in the previous task",
"_____no_output_____"
],
[
"<font size=\"4\" color=\"#B24C00\" face=\"verdana\"> <B>Task 3</B></font> \n## Program: How many for the training?\nCreate a program that prints out a reservation for a training class. Gather the name of the party, the number attending and the time.\n>**example** of what input/output might look like:\n```\nenter name for contact person for training group: Hiroto Yamaguchi \nenter the total number attending the course: 7 \nenter the training time selected: 3:25 PM \n------------------------------ \nReminder: training is schedule at 3:25 PM for the Hiroto Yamaguchi group of 7 attendees \nPlease arrive 10 minutes early for the first class \n``` \n \nDesign and Create your own reminder style \n- **[ ]** get user input for variables:\n - **owner**: name of person the reservation is for \n - **num_people**: how many are attending \n - **training_time**: class time\n- **[ ]** create an integer variable **min_early**: number of minutes early the party should arrive\n- **[ ]** using comma separation, print reminder text\n - use all of the variables in the text\n - use additional strings as needed\n - use multiple print statements to format message on multiple lines (optional)",
"_____no_output_____"
]
],
[
[
"# [ ] get input for variables: owner, num_people, training_time - use descriptive prompt text\nowner = \nnum_people = \ntraining_time = \n# [ ] create a integer variable min_early and \"hard code\" the integer value (e.g. - 5, 10 or 15)\nmin_early = \n# [ ] print reminder text using all variables & add additional strings - use comma separated print formatting\n",
"_____no_output_____"
]
],
[
[
"<font size=\"4\" color=\"#B24C00\" face=\"verdana\"> <B>Example: Telling a story using strings</B></font> \n\nRun the cell below and answer the questions to see the result.",
"_____no_output_____"
]
],
[
[
"#initialize the variables\ngirldescription = \" \" \nboydescription = \" \" \nwalkdescription = \" \" \ngirlname = \" \"\nboyname = \" \"\nanimal = \" \"\ngift = \" \" \nanswer = \" \"\n\n#Ask the user to specify values for the variables\nprint(\":) Welcome to the Awesome Stories game!!!!\")\nprint(\"Please answer the following questions and we will create a story for you\\n\")\ngirlname = input(\"Enter a girl's name: \")\nboyname = input(\"Enter a boy's name: \" )\nanimal = input(\"Name a type of animal: \" )\ngift = input(\"Name something you find in the bathroom: \")\ngirldescription = input(\"Enter a description of a flower: \")\nboydescription = input(\"Enter a description of a car: \")\nwalkdescription = input(\"Enter a description of how you might dance: \" )\nanswer = input(\"What would you say to someone who gave you a cow: \")\n\n#Display the story\n#Don't forget to format the strings when they are displayed\nprint()\nprint (\"Once upon a time,\")\nprint(\"there was a girl named \" + girlname.capitalize() + \".\")\nprint(\"One day, \" + girlname.capitalize() + \" was walking \" + walkdescription.lower() + \" down the street.\")\nprint(\"Then she met a \" + boydescription.lower() + \" boy named \" + boyname.capitalize() + \".\")\nprint(\"He said, 'You are really \" + girldescription.lower() + \"!'\")\nprint(\"She said '\" + answer.capitalize() + \", \" + boyname.capitalize() + \".'\")\nprint(\"Then they both rode away on a \" + animal.lower() + \" and lived happily ever after.\")\n#cl\n",
"_____no_output_____"
],
[
"#initialize the variables\ngirldescription = \" \" \nboydescription = \" \" \nwalkdescription = \" \" \ngirlname = \" \"\ngirlprofession= \" \"\nboyname = \" \"\nanimal = \" \"\nanswer = \" \"\naction_on_thestreet = \"\"\n\n#Ask the user to specify values for the variables\nprint(\":) Welcome to the Awesome Stories game!!!!\")\nprint(\"Please answer the following questions and we will create a story for you\\n\")\ngirlname = input(\"Enter a girl's name: \")\ngirlprofession= input(\"Enter the profession that the girl want to study\")\nboyname = input(\"Enter a boy's name: \" )\nanimal = input(\"Name a type of animal: \" )\nboyprofession = input(\"Enter the profession: \")\nnewyorkstreet = input(\"Enter a famous street of new york: \" )\naction_on_thestreet= input(\"the action they where doing on the street\")\n\n#Display the story\n#Don't forget to format the strings when they are displayed\nprint()\nprint (\"In 2012 in the city of New Work,\")\nprint(\"there was a girl named\",girlname.capitalize(),\"that have a dream to be a great\", girlprofession + \".\")\nprint(\"with the vision to save people with cancer disease\")\nprint(\"12 years later in 2025\")\n#print(girlname.capitalize(), \"was walking\",newyorkstreet(),\"visiting the New York Stock Exchange with her friend that came from cameron\", action_on_thestreet.lower(),\".\")\nprint(\"By coincidence\",boyprofession.lower(),\"that she knew walkout of a building\",\"his name was\",boyname.capitalize(),\"that boy was a cancer pacient stage 4 that she had cure\",\".\")\nprint(\"He said, Do you are\",girlname.capitalize(), girldescription.lower(),'that save my life'\"!\")\nprint(\"She said OMG you are\", boyname, answer.capitalize() + \", \" + boyname.capitalize() + \".'\")\nprint(\"Then they both decided to meet up later at\" + animal.lower() + \"to go have dinner after.\")\n#cl",
"_____no_output_____"
]
],
[
[
"<font size=\"4\" color=\"#B24C00\" face=\"verdana\"> <B>Task 4</B></font> \n\nRun the cell below.",
"_____no_output_____"
]
],
[
[
"#Run this and see what happens\n\nmessage= \"tU TIENES la Pampara enceDIA en DS.\"\n\nprint(\"Option 1:\", message,\"\\n\")\nprint(\"Option 2:\", message.lower(),\"\\n\")\nprint(\"Option 3:\", message.upper(),\"\\n\")\nprint(\"Option 4:\", message.capitalize(),\"\\n\")\nprint(\"Option 5:\", message.title(),\"\\n\")\n\n",
"Option 1: tU TIENES la Pampara enceDIA en DS. \n\nOption 2: tu tienes la pampara encedia en ds. \n\nOption 3: TU TIENES LA PAMPARA ENCEDIA EN DS. \n\nOption 4: Tu tienes la pampara encedia en ds. \n\nOption 5: Tu Tienes La Pampara Encedia En Ds. \n\n"
]
],
[
[
" ### Explain what happened in the cell above.",
"_____no_output_____"
]
],
[
[
"print(\"cincO\"==\"cinco\")",
"False\n"
]
],
[
[
"### Print the variable message and answer: Is the message (the size of its letters) different from the beggining?",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb5b31b0f56e6a6a15b8ac71aab80982f89c6849 | 3,253 | ipynb | Jupyter Notebook | Amazon_scraper_updated.ipynb | creativelyContent/WebScraper---Amazon | fb1edf28aae44308f0f6d58ca77a8e5866a8fa9a | [
"MIT"
] | null | null | null | Amazon_scraper_updated.ipynb | creativelyContent/WebScraper---Amazon | fb1edf28aae44308f0f6d58ca77a8e5866a8fa9a | [
"MIT"
] | null | null | null | Amazon_scraper_updated.ipynb | creativelyContent/WebScraper---Amazon | fb1edf28aae44308f0f6d58ca77a8e5866a8fa9a | [
"MIT"
] | null | null | null | 28.535088 | 146 | 0.474639 | [
[
[
"import requests\nfrom bs4 import BeautifulSoup\nimport pandas as pd\nimport re\n\n\npage = 3\nn = '3A1374407031'\ntitle = 'beauty'\n\n\n\ndef d_data(page):\n HEADERS = ({'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36',\n 'Accept-Language': 'en-US, en;q=0.5'})\n \n r = requests.get(f\"https://www.amazon.in/s?i={title}&rh=n%{n}&fs=true&page={page}\", headers=HEADERS)\n print(r)\n print(f\"https://www.amazon.in/s?i={title}&rh=n%{n}&fs=true&page={page}\")\n\n content = r.content\n s = BeautifulSoup(content,'html5lib')\n \n alls=[]\n \n for d in s.find_all(attrs={'class':'a-section a-spacing-medium'}):\n name = d.find('span', attrs={\"class\":\"a-size-base-plus a-color-base a-text-normal\"}) \n price = d.find('span', attrs={\"class\":\"a-offscreen\"})\n img = d.find('img',attrs={'class':'s-image'})\n \n all1 = []\n \n if name is not None:\n all1.append(name.text)\n else:\n all1.append('Unknown')\n \n if price is not None:\n all1.append(price.text)\n else:\n all1.append('0')\n \n if img is not None:\n all1.append(img['src'])\n else:\n all1.append('No image')\n \n alls.append(all1)\n \n return alls\n\nresults = []\n\nfor i in range(1,page):\n results.append(d_data(i))\n\nflatten = lambda l : [item for sublist in l for item in sublist]\n\ndf = pd.DataFrame(flatten(results), columns = ['Product Name','Price','Image'])\ndf.to_csv(f'amazon_products_newzone2_{title}.csv', mode='a', index=\"false\", encoding = 'utf-8')",
"<Response [200]>\nhttps://www.amazon.in/s?i=beauty&rh=n%3A1374407031&fs=true&page=1\n<Response [200]>\nhttps://www.amazon.in/s?i=beauty&rh=n%3A1374407031&fs=true&page=2\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
cb5b3d5115ace6272b5521ded37b37f3daf8523c | 6,864 | ipynb | Jupyter Notebook | GICAF Usage Example.ipynb | Brauntt/Gicaf_new | b3617aeb3c3569ca49ac53bb079e95b1ad03f590 | [
"MIT"
] | null | null | null | GICAF Usage Example.ipynb | Brauntt/Gicaf_new | b3617aeb3c3569ca49ac53bb079e95b1ad03f590 | [
"MIT"
] | null | null | null | GICAF Usage Example.ipynb | Brauntt/Gicaf_new | b3617aeb3c3569ca49ac53bb079e95b1ad03f590 | [
"MIT"
] | null | null | null | 19.066667 | 153 | 0.530012 | [
[
[
"## Clone GICAF",
"_____no_output_____"
]
],
[
[
"# !git clone https://github.com/gasim97/gicaf.git",
"_____no_output_____"
]
],
[
[
"## Install dependencies\nIMPORTANT: before proceeding, restart the colab runtime after running this cell for the first time in a session\n\nNOTE: Brevitas, a dependency, was in alpha stage at the time of writing, as such usage of Brevitas is not considered stable",
"_____no_output_____"
]
],
[
[
"import gicaf.Dependencies\ngicaf.Dependencies.install()",
"_____no_output_____"
]
],
[
[
"## Imports",
"_____no_output_____"
]
],
[
[
"from gicaf.LoadData import LoadData\nimport gicaf.models.TfLiteModels as tlmodels\nfrom gicaf.attacks.AdaptiveSimBA import AdaptiveSimBA\nfrom gicaf.AttackEngine import AttackEngine\nfrom gicaf.Logger import Logger\nimport gicaf.Utils as utils\nimport matplotlib.pyplot as plt\nfrom os.path import abspath\nimport logging\nlogging.basicConfig(level=logging.INFO)",
"_____no_output_____"
]
],
[
[
"## Load experiment data from Google Drive\n",
"_____no_output_____"
]
],
[
[
"utils.load_tmp_from_gdrive()",
"_____no_output_____"
]
],
[
[
"## Set-up model",
"_____no_output_____"
],
[
"### TfLite",
"_____no_output_____"
]
],
[
[
"parentdir = abspath('')\nloadData = LoadData(ground_truth_file_path=parentdir + \"/gicaf/data/val.txt\", img_folder_path=parentdir + \"/gicaf/data/ILSVRC2012_img_val/\")\n\nmodel = tlmodels.EfficientNetB7(loadData=loadData, bit_width=16)",
"_____no_output_____"
]
],
[
[
"## Load data",
"_____no_output_____"
],
[
"### Load and preprocess data",
"_____no_output_____"
]
],
[
[
"data_generator = loadData.get_data([(100, 120)], model.metadata)",
"_____no_output_____"
]
],
[
[
"### Save preprocessed data",
"_____no_output_____"
]
],
[
[
"loadData.save(data_generator, \"ILSVRC2012_val_100_to_120_EfficientNetB7\")",
"_____no_output_____"
]
],
[
[
"### Load saved preprocessed data",
"_____no_output_____"
]
],
[
[
"data_generator = loadData.load(\"ILSVRC2012_val_100_to_120_EfficientNetB7\", [(7, 7)])",
"_____no_output_____"
]
],
[
[
"## Run attack",
"_____no_output_____"
]
],
[
[
"attacks = [\n AdaptiveSimBA(size=8, epsilon=64/255)\n]",
"_____no_output_____"
],
[
"metrics = [\n 'absolute-value norm', \n 'psnr', \n 'ssim'\n]",
"_____no_output_____"
],
[
"attack_engine = AttackEngine(data_generator, model, attacks)\nloggers, success_rates = attack_engine.run(metric_names=metrics)\nattack_engine.close() # save experiment logs",
"_____no_output_____"
],
[
"success_rates",
"_____no_output_____"
]
],
[
[
"## Analyse data",
"_____no_output_____"
]
],
[
[
"logger = Logger()\nlogger.load(1)",
"_____no_output_____"
],
[
"logs = logger.get_all()",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,7))\nwith plt.style.context('seaborn-whitegrid'):\n plt.plot(logs[0]['ssim'])",
"_____no_output_____"
]
],
[
[
"## Save experiment data to Google Drive",
"_____no_output_____"
]
],
[
[
"utils.save_tmp_to_gdrive()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb5b422d7c4ef21b4d162013abbe26dfbc99774a | 186,261 | ipynb | Jupyter Notebook | examples/generators/howtos/line_segment.ipynb | jamesobutler/porespy | ec9791e63db6e6a1281e364f4d2ea5d3796f70c9 | [
"MIT"
] | 165 | 2016-01-07T19:15:58.000Z | 2022-03-24T16:24:54.000Z | examples/generators/howtos/line_segment.ipynb | jamesobutler/porespy | ec9791e63db6e6a1281e364f4d2ea5d3796f70c9 | [
"MIT"
] | 550 | 2016-02-28T22:49:06.000Z | 2022-03-30T13:33:17.000Z | examples/generators/howtos/line_segment.ipynb | jamesobutler/porespy | ec9791e63db6e6a1281e364f4d2ea5d3796f70c9 | [
"MIT"
] | 81 | 2015-08-20T05:14:25.000Z | 2022-03-20T09:09:58.000Z | 52.660729 | 198 | 0.495692 | [
[
[
"# line_segment",
"_____no_output_____"
],
[
"#### Calculate the voxel coordinates of a straight line between the two given end points",
"_____no_output_____"
],
[
"## Import packages",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport porespy as ps\nimport scipy\n\nps.visualization.set_mpl_style()\nnp.random.seed(10)",
"_____no_output_____"
]
],
[
[
"## Apply generator function:",
"_____no_output_____"
],
[
"### 2D example",
"_____no_output_____"
],
[
"#### Create input points:",
"_____no_output_____"
]
],
[
[
"X0 = [1, 200]\nX1 = [100, 1]\n\n_2dx = [X0[0], X1[0]]\n_2dy = [X0[1], X1[1]]\n\nplt.figure(figsize=[4, 4])\nplt.plot(_2dx, _2dy, 'o', color='black');",
"_____no_output_____"
]
],
[
[
"#### Generate line segment:",
"_____no_output_____"
]
],
[
[
"[_2dx, _2dy] = ps.generators.line_segment(X0=X0, X1=X1)\n\nplt.figure(figsize=[4, 4])\nplt.plot(_2dx, _2dy, 'o', color='black');",
"_____no_output_____"
]
],
[
[
"### 3D example",
"_____no_output_____"
],
[
"#### Create input points:",
"_____no_output_____"
]
],
[
[
"X0 = [50, 10, 100]\nX1 = [200, 100, 300]\n\n_3dx = [X0[0], X1[0]]\n_3dy = [X0[1], X1[1]]\n_3dz = [X0[2], X1[2]]\n\nfig = plt.figure(figsize=[4, 4])\nax = fig.add_subplot(111, projection='3d')\nax.scatter(_3dx, _3dy, _3dz, 'o', color='black');",
"_____no_output_____"
]
],
[
[
"#### Generate line segment:",
"_____no_output_____"
]
],
[
[
"_3dx, _3dy, _3dz = ps.generators.line_segment(X0=X0, X1=X1)\n\nfig = plt.figure(figsize=[4, 4])\nax = fig.add_subplot(111, projection='3d')\nax.scatter(_3dx, _3dy, _3dz, 'o', color='black');",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb5b5127785631df48a304af527daeddbbdff270 | 2,807 | ipynb | Jupyter Notebook | talks/DevSummit2018/ArcGIS API for Python - Introduction to Scripting Your Web GIS/Demo_ContentPubs/L09_PublishingFeatureLayerFromCSV.ipynb | nitz21/arcpy | 36074b5d448c9cfdba166332e99100afb3390824 | [
"Apache-2.0"
] | 2 | 2020-11-23T23:06:04.000Z | 2020-11-23T23:06:07.000Z | talks/DevSummit2018/ArcGIS API for Python - Introduction to Scripting Your Web GIS/Demo_ContentPubs/L09_PublishingFeatureLayerFromCSV.ipynb | josemartinsgeo/arcgis-python-api | 4c10bb1ce900060959829f7ac6c58d4d67037d56 | [
"Apache-2.0"
] | null | null | null | talks/DevSummit2018/ArcGIS API for Python - Introduction to Scripting Your Web GIS/Demo_ContentPubs/L09_PublishingFeatureLayerFromCSV.ipynb | josemartinsgeo/arcgis-python-api | 4c10bb1ce900060959829f7ac6c58d4d67037d56 | [
"Apache-2.0"
] | 1 | 2020-06-06T21:21:18.000Z | 2020-06-06T21:21:18.000Z | 24.840708 | 111 | 0.569291 | [
[
[
"from arcgis.gis import GIS\nimport getpass\n\npassword = getpass.getpass(\"Enter password: \")\ngis = GIS(\"https://www.arcgis.com\", \"<username>\", password)\nprint(\"Connected to {}\".format(gis.properties.urlKey + \".\" + gis.properties.customBaseUrl))",
"_____no_output_____"
],
[
"csv_path = r\"/pathway/to/dataset/NYC_Emergency_Response_Incidents_PointsXY.csv\"\ncsv_properties={'title':'Emergency Response Incidents NYC ',\n 'description':'Emergency Response Incidents in Manhattan',\n 'tags':'nyc, Emergency Response'}\nthumbnail_path = r\"/Users/john3092/Projects/DevSummits/PS_2018/imgs/emer_response.jpg\"\n\nEmergency_Response_Incidents_csv_item = gis.content.add(item_properties=csv_properties, data=csv_path,\n thumbnail = thumbnail_path)",
"_____no_output_____"
],
[
"Emergency_Response_Incidents_csv_item",
"_____no_output_____"
],
[
"Emergency_Response_Incidents_feature_layer_item = Emergency_Response_Incidents_csv_item.publish()",
"_____no_output_____"
],
[
"Emergency_Response_Incidents_feature_layer_item",
"_____no_output_____"
],
[
"search_result = gis.content.search('title:Emergency Response Incidents NYC',\n item_type = 'Feature Layer')\nsearch_result",
"_____no_output_____"
],
[
"map1 = gis.map('New York')\nResponse_Incidents_item = search_result[0]\nmap1.add_layer(Response_Incidents_item)\nmap1",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5b611660c06cb4c2d4f3fdff7132d872df6c62 | 958,295 | ipynb | Jupyter Notebook | tutorial/tutorial.ipynb | abc-research/bento2021 | a604543efc01e8734c7600060a240e0f101f0f51 | [
"MIT"
] | null | null | null | tutorial/tutorial.ipynb | abc-research/bento2021 | a604543efc01e8734c7600060a240e0f101f0f51 | [
"MIT"
] | null | null | null | tutorial/tutorial.ipynb | abc-research/bento2021 | a604543efc01e8734c7600060a240e0f101f0f51 | [
"MIT"
] | null | null | null | 843.569542 | 900,556 | 0.950067 | [
[
[
"# Bento Activity Recognition Tutorial:\n\n",
"_____no_output_____"
],
[
"This notebook has been designed for the bento activity challenge recognition competition with the the aim of providing the basic knowledge of Human Activity Recognition by MOCAP.\n\nIt has been made by Nazmun Nahid.",
"_____no_output_____"
],
[
"# Library import:\nHere we are going to use pandas(https://pandas.pydata.org/docs/user_guide/index.html), numpy(https://numpy.org/devdocs/user/whatisnumpy.html) and matplotlib(https://matplotlib.org/stable/contents.html).\n\n\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n%matplotlib inline \nimport matplotlib.pyplot as plt\nplt.rcParams[\"figure.figsize\"] = (19,15) ",
"_____no_output_____"
]
],
[
[
"# Read Data:\nFirst, we have to load the data in the data frame.",
"_____no_output_____"
]
],
[
[
"df=pd.read_csv('/content/drive/MyDrive/Tutorial/Tutorial.csv')",
"_____no_output_____"
]
],
[
[
"Now let's check what information the data contains!",
"_____no_output_____"
]
],
[
[
"df.head()",
"_____no_output_____"
]
],
[
[
"So, here we can see in the data file there are many rows and columns. Do you want to know the exact number of rows and columns?",
"_____no_output_____"
]
],
[
[
"df.shape",
"_____no_output_____"
]
],
[
[
"# Data Visualization:",
"_____no_output_____"
],
[
"Now, let's see how the data looks like!",
"_____no_output_____"
]
],
[
[
"df.plot()",
"_____no_output_____"
],
[
"df['activity'].value_counts().plot.bar()",
"_____no_output_____"
]
],
[
[
"# Pre-processing:\nIn the preprocessing stage we need to first focus on the missing values. Let's check if our data have any missing values.",
"_____no_output_____"
]
],
[
[
"df.isnull().sum().sum()",
"_____no_output_____"
],
[
"print(df.isnull().sum())",
"X1 0\nY1 0\nZ1 0\nX2 0\nY2 0\nZ2 0\nX3 0\nY3 0\nZ3 0\nX4 156\nY4 156\nZ4 156\nX5 0\nY5 0\nZ5 0\nX6 0\nY6 0\nZ6 0\nX7 17\nY7 17\nZ7 17\nX8 5\nY8 5\nZ8 5\nX9 0\nY9 0\nZ9 0\nX10 0\nY10 0\nZ10 0\nX11 21\nY11 21\nZ11 21\nX12 60\nY12 60\nZ12 60\nX13 22\nY13 22\nZ13 22\nsubject_id 0\nactivity 0\ndtype: int64\n"
]
],
[
[
"We have some missing values. So, we have to keep that in mind while handling the data. To work with this data we will devide the whole data into smaller segments.",
"_____no_output_____"
]
],
[
[
"def segmentation(x_data,overlap_rate,time_window):\n \n # make a list for segment window and its label\n seg_data = []\n y_segmented_list = []\n\n #convert overlap rate to step for sliding window\n overlap = int((1 - overlap_rate)*time_window)\n \n #segment and keep the labels\n for i in range(0,x_data.shape[0],overlap):\n seg_data.append(x_data[i:i+time_window])\n y_segmented_list.append(x_data['activity'][i])\n \n return seg_data,y_segmented_list",
"_____no_output_____"
],
[
"\n#Segmentation with overlaprate=0 & window=100\ndf1_itpl=df.interpolate()\n#replace missing values with 0\ndf1_itpl=df1_itpl.fillna(0) \n[seg, seg_label]=segmentation(df1_itpl,0.5,350)\n ",
"_____no_output_____"
]
],
[
[
"# Feature Extarction:\nThere are many types of features. For ease of use we have shown only some very common features.",
"_____no_output_____"
]
],
[
[
"def get_features(x_data):\n #Set features list\n features = []\n #Set columns name list\n DFclist=list(x_data.columns)\n\n #Calculate features (STD, Average, Max, Min) for each data columns X Y Z \n for k in DFclist:\n # std\n features.append(x_data[k].std(ddof=0))\n # avg\n features.append(np.average(x_data[k]))\n # max\n features.append(np.max(x_data[k]))\n # min\n features.append(np.min(x_data[k]))\n return features",
"_____no_output_____"
],
[
"#set list\nfeatures_list=[]\nlabel_list=[]\nfor j in range(0,len(seg)):\n #extract only xyz columns\n frame1=seg[j].drop(columns=['subject_id','activity'])\n \n\n #Get features and label for each elements\n features_list.append(get_features(frame1))\n label_list.append(seg_label[j])",
"_____no_output_____"
]
],
[
[
"Now we have a feature list and lablel list. Next step is classification.",
"_____no_output_____"
],
[
"# Training:",
"_____no_output_____"
],
[
"For classification there are several models. Here we are using one of the most commonly used model Random Forest.",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier \nmodel_ml = RandomForestClassifier(n_estimators=500,n_jobs=-1)",
"_____no_output_____"
]
],
[
[
"Here we only have one subject. So, we will divide data from this subject into train and test file to evaluate the results. For more than one subject you can also put one subject in testing and others in training.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(features_list, label_list, test_size=0.3, random_state=42)",
"_____no_output_____"
]
],
[
[
"Now let's train the model!",
"_____no_output_____"
]
],
[
[
"model_ml.fit(X_train, y_train)",
"_____no_output_____"
]
],
[
[
"The training is complete but how can we see the results? For that we will here use classification report with which we can see the accuracy, precision, recall and f1 score. We will also use confusion matrix for the evaluation.",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import classification_report\nfrom sklearn.metrics import plot_confusion_matrix\nfrom sklearn.metrics import confusion_matrix\n\ny_predict = model_ml.predict(X_test)\nprint(classification_report(y_test,y_predict))\n#confusion_matrix(y_test, y_predict)\nplot_confusion_matrix(model_ml, X_test, y_test)\nplt.show()",
" precision recall f1-score support\n\n 1 0.88 0.79 0.83 47\n 2 0.87 0.84 0.85 56\n 3 0.96 0.90 0.93 52\n 4 0.87 0.85 0.86 48\n 5 0.78 0.93 0.85 56\n\n accuracy 0.86 259\n macro avg 0.87 0.86 0.87 259\nweighted avg 0.87 0.86 0.87 259\n\n"
]
],
[
[
"We have successfully completed learning to read the data, visualize data, pre-processing, feature extraction, classification and evaluation of the generated model. Now it's your turn to generate a model following these steps and predict the labels of the test data. Best of luck!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb5b8e822b444ef9e2b4e55e46159ff11db647fe | 539 | ipynb | Jupyter Notebook | Arensdorf orbit.ipynb | la3lma/julia-playground | 4bf82651025466f9364a0cef47fa8d6ebb61909e | [
"Apache-2.0"
] | null | null | null | Arensdorf orbit.ipynb | la3lma/julia-playground | 4bf82651025466f9364a0cef47fa8d6ebb61909e | [
"Apache-2.0"
] | null | null | null | Arensdorf orbit.ipynb | la3lma/julia-playground | 4bf82651025466f9364a0cef47fa8d6ebb61909e | [
"Apache-2.0"
] | null | null | null | 16.333333 | 64 | 0.517625 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
cb5ba013b10dea0b11cf57b03f5fec1b4d41eb2e | 23,864 | ipynb | Jupyter Notebook | Vertiefungstage1_2/11 Pandas Teil 3/Date+Time Basics.ipynb | Priskawa/kurstag2 | 028d5b07011d7ddc2b2416aa40b7f94dee134614 | [
"MIT"
] | null | null | null | Vertiefungstage1_2/11 Pandas Teil 3/Date+Time Basics.ipynb | Priskawa/kurstag2 | 028d5b07011d7ddc2b2416aa40b7f94dee134614 | [
"MIT"
] | null | null | null | Vertiefungstage1_2/11 Pandas Teil 3/Date+Time Basics.ipynb | Priskawa/kurstag2 | 028d5b07011d7ddc2b2416aa40b7f94dee134614 | [
"MIT"
] | null | null | null | 22.075856 | 207 | 0.500838 | [
[
[
"# Date+Time Basics",
"_____no_output_____"
],
[
"**Inhalt:** Mit Zeit-Datentyp umgehen\n\n**Nötige Skills:** Erste Schritte mit Pandas\n\n**Lernziele:**\n- Text in Zeit konvertieren\n- Zeit in Text konvertieren\n- Zeit-Informationen extrahieren\n- Einfache Zeit-Operationen",
"_____no_output_____"
],
[
"## Libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"from datetime import datetime",
"_____no_output_____"
],
[
"from datetime import timedelta",
"_____no_output_____"
]
],
[
[
"## Zeitformat Codes",
"_____no_output_____"
],
[
"Extrakt, die volle Liste: http://strftime.org/. Diese Format-Codes brauchen wir, um mit Daten zu arbeiten.",
"_____no_output_____"
],
[
"| Code | Description | *Example* |\n|--------|---------|--------|\n| **`%a`** | Weekday as locale’s abbreviated name. | *Mon* |\n| **`%A`** | Weekday as locale’s full name. | *Monday* |\n| **`%d`** | Day of the month as a zero-padded decimal number. | *30* |\n| **`%-d`** | Day of the month as a decimal number. (Platform specific) | *30* |\n| **`%b`** | Month as locale’s abbreviated name. | *Sep* |\n| **`%B`** | Month as locale’s full name. | *September* |\n| **`%m`** | Month as a zero-padded decimal number. | *09* |\n| **`%-m`** | Month as a decimal number. (Platform specific) | *9* |\n| **`%y`** | Year without century as a zero-padded decimal number. | *13* |\n| **`%Y`** | Year with century as a decimal number. | *2013* |\n| **`%H`** | Hour (24-hour clock) as a zero-padded decimal number. | *07* |\n| **`%-H`** | Hour (24-hour clock) as a decimal number. (Platform specific) | *7* |\n| **`%I`** |\tHour (12-hour clock) as a zero-padded decimal number. \t| *07* |\n| **`%-I`** |\tHour (12-hour clock) as a decimal number. (Platform specific) \t| *7* |\n| **`%p`** |\tLocale’s equivalent of either AM or PM. \t| *AM* |\n| **`%M`** | Minute as a zero-padded decimal number. | *06* |\n| **`%-M`** | Minute as a decimal number. (Platform specific) | *6* |\n| **`%S`** | Second as a zero-padded decimal number. | *05* |\n| **`%-S`** | Second as a decimal number. (Platform specific) | *5* |\n| **`%j`** | Day of the year as a zero-padded decimal number. | *273* |\n| **`%-j`** | Day of the year as a decimal number. (Platform specific) | *273* |\n| **`%U`** | Week number of the year (Sunday as the first day of the week) as a zero padded decimal number. All days in a new year preceding the first Sunday are considered to be in week 0. | *39* |\n| **`%W`** | Week number of the year (Monday as the first day of the week) as a decimal number. All days in a new year preceding the first Monday are considered to be in week 0. | *39* |\n| **`%c`** | Locale’s appropriate date and time representation. | *Mon Sep 30 07:06:05 2013* |\n| **`%x`** | Locale’s appropriate date representation. | *09/30/13* |\n| **`%X`** | Locale’s appropriate time representation. | *07:06:05* |\n| **`%%`** | A literal '%' character. | *%*",
"_____no_output_____"
],
[
"## Text to Time",
"_____no_output_____"
],
[
"Eine häufige Situation, wenn man von irgendwo Daten importiert:\n- Wir haben einen bestimmten String, zB: \"1981-08-23\"\n- Wir wollen den String in ein Datetime-Objekt verwandeln, um sie zu analysieren\n- Dazu benutzen wir die Pandas-Funktion `to_datetime()`",
"_____no_output_____"
]
],
[
[
"my_birthday_date = pd.to_datetime('1981-08-23', format='%Y-%m-%d') \n\n#funktion heisst .to_datetime: nimm den Textstring und mach ein Datum daraus.",
"_____no_output_____"
]
],
[
[
"Das Ergebnis wird uns als \"Timestamp\" angezeigt.",
"_____no_output_____"
]
],
[
[
"my_birthday_date",
"_____no_output_____"
]
],
[
[
"Die Funktion erkennt einige Standardformate automatisch",
"_____no_output_____"
]
],
[
[
"my_date = pd.to_datetime('1972-03-11 13:42:25')",
"_____no_output_____"
],
[
"my_date",
"_____no_output_____"
]
],
[
[
"**Platz** zum ausprobieren. Kreiere ein Datetime-Objekt aus folgenden Strings:",
"_____no_output_____"
]
],
[
[
"# Beispiel: '23.08.1981'\nmy_date = pd.to_datetime('23.08.1981', format='%d.%m.%Y')\nmy_date",
"_____no_output_____"
],
[
"# Do it yourself: 'Aug 23, 1981'\n\ndat1 = pd.to_datetime(\"Aug 23, 1981\", format=\"%b %d, %Y\")\ndat1",
"_____no_output_____"
],
[
"# '18.01.2016, 18:25 Uhr'\n\ndat2 = pd.to_datetime(\"18.01.2016, 18:25 Uhr\" , format= \"%d.%m.%Y, %H:%M Uhr\")\ndat2",
"_____no_output_____"
],
[
"# '5. May 2014'\ndat3 = pd.to_datetime(\"5. May 2014\", format='%d. %B %Y')\ndat3",
"_____no_output_____"
],
[
"(\"5. Mai 2014\").replace(\"Mai\", \"May\")",
"_____no_output_____"
],
[
"# '5. Mai 2014'\nmy_date = pd.to_datetime('5. Mai 2014'.replace('Mai','May'), format='%d. %B %Y')\nmy_date",
"_____no_output_____"
]
],
[
[
"## Time to Text",
"_____no_output_____"
],
[
"Brauchen wir typischerweise bei der Anzeige oder beim Export von DAten\n- Wir haben bereits ein Datetime-Objekt erstellt \n- jetzt wollen wir es nach einem bestimmten Schema anzeigen\n- dafür dient die Funktion `strftime()`, die jedes Datetime-Objekt hat",
"_____no_output_____"
],
[
"Das Datums-Ojbekt haben wir bereits:",
"_____no_output_____"
]
],
[
[
"my_date = pd.to_datetime('1981-08-23 08:15:25')",
"_____no_output_____"
]
],
[
[
"Als Text:",
"_____no_output_____"
]
],
[
[
"my_text = my_date.strftime(format='%Y-%m-%d')\n\n# .strftime() (wie string for time), formatiere eine Zeit (Timestamp) als String. Gib ein sauberes Datum aus",
"_____no_output_____"
],
[
"my_text",
"_____no_output_____"
]
],
[
[
"**Quiz**: Lass `strftime()` den folgenden Text ausgeben:",
"_____no_output_____"
]
],
[
[
"# Beispiel: 'Aug 23, 1981'\nmy_text = my_date.strftime(format=\"%b %d, %Y\")\nmy_text",
"_____no_output_____"
],
[
"# Do it yourself: #'23.8.81, 08:15:25'\ntext1 = my_date.strftime(format=\"%d.%m.%Y, %H:%M:%S'\")\ntext1",
"_____no_output_____"
],
[
"# 'Sunday, 23. of August 1981, 8:15 AM'\ntext2 = my_date.strftime(format=\"%A, %d. of %B, %-I:%M %p\")\ntext2",
"_____no_output_____"
]
],
[
[
"## Time properties",
"_____no_output_____"
],
[
"`strftime()` ist nicht die einzige Möglichkeit, Daten als Text anzuzeigen.\n\nWir können auch direkt einzelne Eigenschaften eines Datetime-Objekts abfragen.",
"_____no_output_____"
],
[
"Taken from https://pandas.pydata.org/pandas-docs/stable/timeseries.html\n\n| Property | Description |\n|----------|------------|\n| **`.year`** | - The year of the datetime |\n| **`.month`** | - The month of the datetime |\n| **`.day`** | - The days of the datetime |\n| **`.hour`** | - The hour of the datetime |\n| **`.minute`** | - The minutes of the datetime |\n| **`.second`** | - The seconds of the datetime |\n| **`.microsecond`** | - The microseconds of the datetime |\n| **`.nanosecond`** | - The nanoseconds of the datetime |\n| **`.date`** | - Returns datetime.date (does not contain timezone information) |\n| **`.time`** | - Returns datetime.time (does not contain timezone information) |\n| **`.dayofyear`** | - The ordinal day of year |\n| **`.weekofyear`** | - The week ordinal of the year |\n| **`.week`** | - The week ordinal of the year |\n| **`.dayofweek`** | - The number of the day of the week with Monday=0, Sunday=6 |\n| **`.weekday`** | - The number of the day of the week with Monday=0, Sunday=6 |\n| **`.weekday_name`** | - The name of the day in a week (ex: Friday) |\n| **`.quarter`** | - Quarter of the date: Jan-Mar = 1, Apr-Jun = 2, etc. |\n| **`.days_in_month`** | - The number of days in the month of the datetime |\n| **`.is_month_start`** | - Logical indicating if first day of month (defined by frequency) |\n| **`.is_month_end`** | - Logical indicating if last day of month (defined by frequency) |\n| **`.is_quarter_start`** | - Logical indicating if first day of quarter (defined by frequency) |\n| **`.is_quarter_end`** | - Logical indicating if last day of quarter (defined by frequency) |\n| **`.is_year_start`** | - Logical indicating if first day of year (defined by frequency) |\n| **`.is_year_end`** | - Logical indicating if last day of year (defined by frequency) |\n| **`.is_leap_year`** | - Logical indicating if the date belongs to a leap year |",
"_____no_output_____"
],
[
"Das funktioniert dann ganz einfach:",
"_____no_output_____"
]
],
[
[
"my_date.year",
"_____no_output_____"
],
[
"my_date.day",
"_____no_output_____"
],
[
"my_date.is_month_start",
"_____no_output_____"
]
],
[
[
"**Quiz**:",
"_____no_output_____"
]
],
[
[
"# In welcher Jahreswoche liegt unser Datum `my_date`?\nmy_date.weekofyear\n",
"_____no_output_____"
],
[
"# Um was für einen Wochentag handelt es sich (Zahl)?\nmy_date.dayofweek",
"_____no_output_____"
]
],
[
[
"## Zeitintervalle",
"_____no_output_____"
],
[
"\"Timedelta\" ist ein spezieller Datentyp, der kein Datum, sondern einen Zeitintervall modelliert.\n\nWir können diesen Datentyp z.B. für Vergleiche zwischen zwei Daten brauchen. ",
"_____no_output_____"
],
[
"Die folgenden Intervalle stehen uns dabei zur Verfügung:\n\n**`weeks`** - Wochen\n\n**`days`** - Tage\n\n**`hours`** - Stunden\n\n**`minutes`** - Minuten\n\n**`seconds`** - Sekunden\n\n**`microseconds`** - Mikrosekunden",
"_____no_output_____"
],
[
"Ein Intervall erstellen wir mit der Funktion `timedelta()`",
"_____no_output_____"
]
],
[
[
"d = timedelta(days=2)\nd",
"_____no_output_____"
],
[
"d = timedelta(hours=1)\nd",
"_____no_output_____"
]
],
[
[
"Wir können die Argumente beliebig kombinieren",
"_____no_output_____"
]
],
[
[
"d = timedelta(days=3, hours=10, minutes=25, seconds=10)\nd",
"_____no_output_____"
]
],
[
[
"Wir können ein Zeitintervall zu einem Datetime-Objekt addieren oder subtrahieren:",
"_____no_output_____"
]
],
[
[
"my_date + d",
"_____no_output_____"
],
[
"my_date - d",
"_____no_output_____"
]
],
[
[
"Ein Timedelta erhalten wir auch, wenn wir die Differenz von zwei Daten bilden:",
"_____no_output_____"
]
],
[
[
"my_date_1 = pd.to_datetime('1981-08-23', format='%Y-%m-%d')\nmy_date_2 = pd.to_datetime('1981-08-25', format='%Y-%m-%d')\nd = my_date_2 - my_date_1",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
]
],
[
[
"Die Info erhalten wir wiederum, indem wir die Eigenschaft abfragen:",
"_____no_output_____"
]
],
[
[
"d.days",
"_____no_output_____"
]
],
[
[
"**Quiz:** Wie viele Tage liegen zwischen folgenden Daten?",
"_____no_output_____"
]
],
[
[
"my_string_1 = '2001/09/11'\nmy_string_2 = '2016/11/09'",
"_____no_output_____"
],
[
"#Antwort\nmy_date_1 = pd.to_datetime(my_string_1, format='%Y/%m/%d')\nmy_date_2 = pd.to_datetime(my_string_2, format='%Y/%m/%d')\nd = my_date_2 - my_date_1\nd.days",
"_____no_output_____"
]
],
[
[
"**Quiz:** Ich werde ab dem 1. Januar 2019 um 0:00 Uhr während 685648 Sekunden keinen Alkohol trinken. An welchem Datum greife ich wieder zum Glas?",
"_____no_output_____"
]
],
[
[
"#Antwort\n\n#Startdatum als Variable kreieren\nneujahr= pd.to_datetime(\"2019-01-01\", format=\"%Y-%m-%d\")\n#Timedelta kreieren\nd= timedelta(seconds=685648)\n\n#Variable für Datum und Timedelta erstellen\ntrinkstart = neujahr + d\n\n#Resultat ausgeben als String\ntrinkstart.strftime(format=\"%Y-%m-%d\")",
"_____no_output_____"
]
],
[
[
"## Hier und Jetzt",
"_____no_output_____"
],
[
"Last but not least: eine Funktion, die uns das aktuelle Datum samt Zeit angibt:",
"_____no_output_____"
]
],
[
[
"jetzt = datetime.today()",
"_____no_output_____"
],
[
"jetzt",
"_____no_output_____"
]
],
[
[
"Wir können dieses Datum wie jedes andere Datum auch anzeigen:",
"_____no_output_____"
]
],
[
[
"jetzt.strftime(format='%Y-%m-%d: %H:%M:%S')",
"_____no_output_____"
]
],
[
[
"Wir können auch damit herumrechnen:",
"_____no_output_____"
]
],
[
[
"d = timedelta(days=1)",
"_____no_output_____"
],
[
"(jetzt - d).strftime(format='%Y-%m-%d: %H:%M:%S')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
cb5bae79b67de1140ca942612a9d97845c4ad3af | 27,465 | ipynb | Jupyter Notebook | work/circular_law.ipynb | iiineco/analysis | 930df13ecd6ff771b719882c80142d7a57121227 | [
"MIT"
] | null | null | null | work/circular_law.ipynb | iiineco/analysis | 930df13ecd6ff771b719882c80142d7a57121227 | [
"MIT"
] | null | null | null | work/circular_law.ipynb | iiineco/analysis | 930df13ecd6ff771b719882c80142d7a57121227 | [
"MIT"
] | null | null | null | 150.081967 | 9,758 | 0.874604 | [
[
[
"<a href=\"https://colab.research.google.com/github/iiineco/analysis/blob/master/matrix/decomposition/circular_law.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Circular law\n=============\n\nn × n ランダム行列における固有値の分布は、円状に一様に分布するという確率論。\n\npython を利用して Circular law を確認してみる。\n\n参考:https://en.wikipedia.org/wiki/Circular_law\n\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\ndef plt_circular_law(size):\n # [n × n] 行列作成\n n = size\n a_vec = np.random.rand(n*n) / np.sqrt(n)\n a_matrix= np.reshape(a_vec, (n, n))\n\n # 固有値 [1 × n]、固有ベクトル [n × n] を取得\n det, vec = np.linalg.eig(a_matrix)\n\n # diag: 固有値配列(det)の各要素を対角要素に持つ対角行列作成\n d_matrix = np.diag(det)\n\n # x 軸に実部、y 軸に虚部として表示\n plt.plot(d_matrix.real, d_matrix.imag, \"b.\")\n # グラフの縦横比を同じに設定\n plt.axis(\"equal\")\n # グラフの表示範囲指定\n plt.xlim(-1, 1)\n plt.ylim(-1, 1)\n plt.show()\n\n",
"_____no_output_____"
],
[
"\n\n# ランダム行列 [5 × 5] における固有値の場合\nplt_circular_law(5)",
"_____no_output_____"
],
[
"# ランダム行列 [10 × 10] における固有値の場合\nplt_circular_law(10)",
"_____no_output_____"
],
[
"# ランダム行列 [100 × 100] における固有値の場合\nplt_circular_law(100)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
cb5bba52465b55ac58bcefc98be2086676402bfc | 1,756 | ipynb | Jupyter Notebook | Interview Preparation Kit/13. Recursion and Backtracking/Recursion_Davis' Staircase.ipynb | Nam-SH/HackerRank | d1ced5cdad3eae7661f39af4d12aa33f460821cb | [
"MIT"
] | null | null | null | Interview Preparation Kit/13. Recursion and Backtracking/Recursion_Davis' Staircase.ipynb | Nam-SH/HackerRank | d1ced5cdad3eae7661f39af4d12aa33f460821cb | [
"MIT"
] | null | null | null | Interview Preparation Kit/13. Recursion and Backtracking/Recursion_Davis' Staircase.ipynb | Nam-SH/HackerRank | d1ced5cdad3eae7661f39af4d12aa33f460821cb | [
"MIT"
] | null | null | null | 20.904762 | 116 | 0.455581 | [
[
[
"# Recursion: Davis' Staircase\n\n<br>\n\n",
"_____no_output_____"
]
],
[
[
"#!/bin/python3\n\nimport math\nimport os\nimport random\nimport re\nimport sys\n\nmod = int(1e10 + 7)\ndp = [-1] * 100001\ndp[0] = dp[1] = 1\ndp[2] = 2\n\n# Complete the stepPerms function below.\n\n\ndef stepPerms(n):\n\n if dp[n] != -1:\n return dp[n]\n\n dp[n] = stepPerms(n - 1) + stepPerms(n - 2) + stepPerms(n - 3)\n dp[n] %= mod\n return dp[n]\n\n\nif __name__ == '__main__':\n fptr = open(os.environ['OUTPUT_PATH'], 'w')\n\n s = int(input())\n\n for s_itr in range(s):\n n = int(input())\n\n res = stepPerms(n)\n\n fptr.write(str(res) + '\\n')\n\n fptr.close()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
cb5bbacfba39c768cbb0d0bedd8bddc29a0eb4a9 | 18,860 | ipynb | Jupyter Notebook | analysis/notebooks/results_analysis.ipynb | cbrewitt/GRIT-OpenDrive | d8f8898e8fc360f4247aebcc91a855cd2659325f | [
"MIT"
] | null | null | null | analysis/notebooks/results_analysis.ipynb | cbrewitt/GRIT-OpenDrive | d8f8898e8fc360f4247aebcc91a855cd2659325f | [
"MIT"
] | null | null | null | analysis/notebooks/results_analysis.ipynb | cbrewitt/GRIT-OpenDrive | d8f8898e8fc360f4247aebcc91a855cd2659325f | [
"MIT"
] | null | null | null | 23.964422 | 1,161 | 0.56018 | [
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom igp2 import AgentState\nfrom igp2.data.data_loaders import InDDataLoader\nfrom igp2.data.episode import Frame\nfrom igp2.data.scenario import InDScenario, ScenarioConfig\nfrom igp2.opendrive.map import Map\nfrom igp2.opendrive.plot_map import plot_map\nfrom core.feature_extraction import FeatureExtractor, GoalDetector\nfrom core.goal_generator import GoalGenerator\nfrom core import feature_extraction\nprint(feature_extraction.__file__)",
"_____no_output_____"
],
[
"odr_results = pd.read_csv('../predictions/heckstrasse_trained_trees_test.csv')\nodr_results.shape",
"_____no_output_____"
],
[
"lanelet_results = pd.read_csv('../../GRIT-lanelet/predictions/heckstrasse_trained_trees_test.csv')\nlanelet_results.shape",
"_____no_output_____"
],
[
"odr_results = pd.read_csv('../predictions/frankenberg_trained_trees_test.csv')",
"_____no_output_____"
],
[
"odr_results",
"_____no_output_____"
],
[
"lanelet_results = pd.read_csv('../../GRIT-lanelet/predictions/frankenberg_trained_trees_test.csv')",
"_____no_output_____"
],
[
"lanelet_results",
"_____no_output_____"
],
[
"lanelet_results[['episode', 'agent_id']].drop_duplicates()",
"_____no_output_____"
],
[
"odr_results[['episode', 'agent_id']].drop_duplicates()",
"_____no_output_____"
],
[
"lanelet_agents = lanelet_results.agent_id.drop_duplicates()\nodr_agents = odr_results.agent_id.drop_duplicates()",
"_____no_output_____"
],
[
"lanelet_agents.isin(odr_agents).sum()",
"_____no_output_____"
],
[
"lanelet_agents.shape",
"_____no_output_____"
],
[
"odr_agents.isin(lanelet_agents).sum()",
"_____no_output_____"
],
[
"odr_agents.shape",
"_____no_output_____"
],
[
"lanelet_agents.loc[~lanelet_agents.isin(odr_agents)]",
"_____no_output_____"
],
[
"lanelet_results.loc[lanelet_results.agent_id==75]",
"_____no_output_____"
]
],
[
[
"Why does opendrive have more vehicles? e.g agent 1. \nDifferent goal locations? - yes\nBicycles are also included - needs fixing\n\nWhy are some agents included in lanelet2 but not odr? e.g. 75 - vehicle misses goal slightly to the right. Should goal detection be based on dist along lane rather than pointgoal? or increase goal radius to match lane width\n\nBaseline acc goes down at final point (1.0 of traj obs)",
"_____no_output_____"
]
],
[
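[
"# A minimal, self-contained sketch of the goal-check idea raised above (not the GRIT implementation).\n# It compares a point-radius goal test with a distance-along-the-lane-midline test; the midline,\n# goal and endpoint are made up for illustration, and shapely is assumed to be available\n# (the lane midlines used later in this notebook expose .coords like shapely LineStrings).\nfrom shapely.geometry import LineString, Point\n\nmidline = LineString([(0, 0), (20, 0)])  # hypothetical lane midline\ngoal = Point(18, 0)                      # goal placed on the midline\nendpoint = Point(18, 1.8)                # vehicle stops slightly to the side of the goal\n\nradius = 1.5\npoint_hit = endpoint.distance(goal) <= radius\n# compare arc-length positions along the midline instead of the straight-line distance\nlane_hit = abs(midline.project(endpoint) - midline.project(goal)) <= radius\nprint('point-radius check:', point_hit, ' distance-along-lane check:', lane_hit)",
"_____no_output_____"
],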
[
"%cd ..",
"_____no_output_____"
],
[
"scenario_name = 'frankenberg'\nscenario_map = Map.parse_from_opendrive(f\"scenarios/maps/{scenario_name}.xodr\")\n\nscenario_config = ScenarioConfig.load(f\"scenarios/configs/{scenario_name}.json\")\nscenario = InDScenario(scenario_config)",
"_____no_output_____"
],
[
"episode_idx = 5\nepisode = scenario.load_episode(episode_idx)\nagent = episode.agents[75]",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(10, 6))\nplot_map(scenario_map, ax=ax)\npath = agent.trajectory.path\nax.plot(path[:, 0], path[:, 1])\nax.plot(*scenario_config.goals[0], 'o')",
"_____no_output_____"
],
[
"scenario_name = 'heckstrasse'\nscenario_map = Map.parse_from_opendrive(f\"scenarios/maps/{scenario_name}.xodr\")\n\nscenario_config = ScenarioConfig.load(f\"scenarios/configs/{scenario_name}.json\")\nscenario = InDScenario(scenario_config)",
"_____no_output_____"
],
[
"goal_detector = GoalDetector(scenario.config.goals)",
"_____no_output_____"
],
[
"episode_idx = 0\nepisode = scenario.load_episode(episode_idx)",
"_____no_output_____"
],
[
"agent = episode.agents[0]",
"_____no_output_____"
],
[
"agent_goals, goal_frame_idxes = goal_detector.detect_goals(agent.trajectory)",
"_____no_output_____"
],
[
"agent_goals",
"_____no_output_____"
],
[
"trajectory = agent.trajectory",
"_____no_output_____"
],
[
"feature_extractor = FeatureExtractor(scenario_map)",
"_____no_output_____"
],
[
"for idx in range(0, len(agent.trajectory.path)):\n typed_goals = feature_extractor.get_typed_goals(agent.trajectory.slice(0, idx+1), scenario.config.goals)\n print(idx, [g is not None for g in typed_goals])",
"_____no_output_____"
],
[
"typed_goals",
"_____no_output_____"
],
[
"agent.trajectory.path[68]",
"_____no_output_____"
],
[
"ax = plot_map(scenario_map)\nax.plot(*agent.trajectory.path[68], 'o')\nax.plot([20],[-60], 'o')",
"_____no_output_____"
],
[
"scenario_map.lanes_at(agent.trajectory.path[0], max_distance=3)",
"_____no_output_____"
],
[
"lanes = scenario_map.lanes_within_angle(agent.trajectory.path[0],\n agent.trajectory.heading[0],\n threshold=np.pi/4, max_distance=3)\nprint(lanes)",
"_____no_output_____"
],
[
"ax = plot_map(scenario_map)\nfor lane in lanes:\n ax.plot(*list(zip(*[x for x in lane.midline.coords])))",
"_____no_output_____"
],
[
"goal_point = np.array((62.2, -47.3))\nidx = 70\nbest_lane = scenario_map.best_lane_at(agent.trajectory.path[idx],\n agent.trajectory.heading[idx],\n max_distance=3, goal_point=goal_point)\nprint(best_lane)",
"_____no_output_____"
],
[
"ax = plot_map(scenario_map)\nax.plot(*list(zip(*[x for x in best_lane.midline.coords])))",
"_____no_output_____"
],
[
"data = pd.read_csv('data/heckstrasse_e0.csv')",
"_____no_output_____"
],
[
"goals_10 = data.loc[data.fraction_observed==1.0].value_counts('agent_id')",
"_____no_output_____"
],
[
"goals_09 = data.loc[data.fraction_observed==0.9].value_counts('agent_id')",
"_____no_output_____"
],
[
"(goals_10 > goals_09).sum()",
"_____no_output_____"
],
[
"predictions = pd.read_csv('predictions/heckstrasse_prior_baseline_test.csv')\npredictions",
"_____no_output_____"
],
[
"predictions.loc[predictions.fraction_observed==1.0].model_correct.mean()",
"_____no_output_____"
],
[
"predictions.loc[predictions.fraction_observed==0.9].model_correct.mean()",
"_____no_output_____"
],
[
"idx = predictions.loc[predictions.fraction_observed==0.9].set_index('agent_id').model_correct \\\n != predictions.loc[predictions.fraction_observed==1.0].set_index('agent_id').model_correct",
"_____no_output_____"
],
[
"idx.loc[idx]",
"_____no_output_____"
],
[
"predictions.loc[predictions.agent_id==15]",
"_____no_output_____"
]
],
[
[
"Problem: Wrong goal type inferred at the last minute - why? G1 assigned goal type turn-left",
"_____no_output_____"
]
],
[
[
"data.loc[data.agent_id==15]",
"_____no_output_____"
],
[
"# lane id -1 on road 6, heckstrasse - detected as goal G1 - this must be junction NE to SE\n# take into account trajectory history when detecting lane? Is this done for lanelet2 GRIT? e.g. previous lanelet",
"_____no_output_____"
],
[
"ax = plot_map(scenario_map)\nlane = scenario_map.get_lane(7, -1)\nax.plot(*list(zip(*[x for x in lane.midline.coords])))\nax.plot([36.0], [-27.0], 'o')",
"_____no_output_____"
],
[
"heading = -0.6367160078810041\nspeed = 15.915689301070186",
"_____no_output_____"
],
[
"scenario_name = 'round'\nscenario_map = Map.parse_from_opendrive(f\"scenarios/maps/{scenario_name}.xodr\")\n\nscenario_config = ScenarioConfig.load(f\"scenarios/configs/{scenario_name}.json\")\nscenario = InDScenario(scenario_config)",
"_____no_output_____"
],
[
"episode_idx = 0\nepisode = scenario.load_episode(episode_idx)\n",
"_____no_output_____"
],
[
"print(len(agent_goals))\nfor g in agent_goals:\n print(g)",
"_____no_output_____"
],
[
"pwd",
"_____no_output_____"
],
[
"odr_results = pd.read_csv('predictions/round_trained_trees_test.csv')\nodr_results.shape",
"_____no_output_____"
],
[
"lanelet_results = pd.read_csv('../GRIT-lanelet/predictions/round_trained_trees_test.csv')\nlanelet_results.shape",
"_____no_output_____"
],
[
"odr_results[['episode', 'agent_id', 'fraction_observed']]",
"_____no_output_____"
],
[
"# isin with multiple columns?",
"_____no_output_____"
],
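[
"# One possible answer to the question above (a sketch): build a MultiIndex over the key\n# columns and test membership on it, instead of calling isin on a single column.\nkeys = ['episode', 'agent_id']\nmask = odr_results.set_index(keys).index.isin(lanelet_results.set_index(keys).index)\nodr_results.loc[mask, keys].drop_duplicates().head()",
"_____no_output_____"
],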
[
"episode = 4\nfraction_observerd = 0.8\nodr_samples = odr_results.loc[(odr_results.episode == episode) \n & (odr_results.fraction_observed == fraction_observerd)].set_index('agent_id')\nlanelet_samples = lanelet_results.loc[(lanelet_results.episode == episode) \n & (lanelet_results.fraction_observed == fraction_observerd)].set_index('agent_id')",
"_____no_output_____"
],
[
"lanelet_samples",
"_____no_output_____"
],
[
"odr_samples",
"_____no_output_____"
],
[
"odr_samples = odr_samples.join(lanelet_samples.model_correct, rsuffix='_ll')",
"_____no_output_____"
],
[
"odr_samples.loc[odr_samples.model_correct != odr_samples.model_correct_ll]",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5bd307bc535f33bab75ccd3f361730679b3a06 | 7,195 | ipynb | Jupyter Notebook | Ch3.ipynb | agrricha/agrricha.github.io | 02cc8f4b511565eef2e71a249d2e712c3f016d21 | [
"MIT"
] | null | null | null | Ch3.ipynb | agrricha/agrricha.github.io | 02cc8f4b511565eef2e71a249d2e712c3f016d21 | [
"MIT"
] | null | null | null | Ch3.ipynb | agrricha/agrricha.github.io | 02cc8f4b511565eef2e71a249d2e712c3f016d21 | [
"MIT"
] | null | null | null | 28.665339 | 389 | 0.540097 | [
[
[
"#!jupyter nbextension enable --py widgetsnbextension --sys-prefix\n#!jupyter serverextension enable voila --sys-prefix",
"_____no_output_____"
],
[
"%matplotlib widget\nimport ipywidgets as widgets\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom IPython.display import display, clear_output\n\noutput1 = widgets.Output()\noutput2 = widgets.Output()\noutput3 = widgets.Output()\noutput4 = widgets.Output()\n# create some x data\n#x = np.linspace(0, 2 * np.pi, 100)\n\ne1 = 0.25 #m\ne2 = 0.05\nk1 = 0.2 #W/(m.K)\nk2 = 0.1\nhe2 = 40\nQ=400 #W/m3\nLa = np.linspace(0,e1,10)\nLb = np.linspace(0,e2,10)\n#Ltot = np.linspace(0,e1+e2,20)\nTe = 300 #K\nqpp = Q*e1\n\ndef T_B(Qa,ea,h,eb,kb):\n q = Qa*ea\n Tbr = Te + q/h\n Tbl = Tbr + q*eb/kb\n return Tbr, Tbl\n\ndef T_A(Qa,ea,ka, h, eb, kb):\n x = np.linspace(0,ea,10)\n x2 = np.linspace(0,eb,10)\n Ta = []\n L = []\n TbR, TbL = T_B(Qa,ea,h,eb,kb)\n for i in range(len(x)):\n Ta.append(-Qa*x[i]*x[i]/(2*ka)+((TbL-Te)/ea+Q*ea/(2*ka))*x[i]+Te)\n L.append(x[i])\n for i in range(1, len(x2)):\n Ta.append((TbR-TbL)*(x[len(x)-1]+x2[i])/x2[len(x2)-1]+(TbL*(x[len(x)-1]+x2[len(x2)-1])-TbR*x[len(x)-1])/x2[len(x2)-1])\n L.append(x[len(x)-1]+x2[i])\n return L, Ta\n\nLtot, Ttot = T_A(Q,e1,k1, he2, e2, k2)\n\n# default line color\ninitial_color = '#FF00DD'\n\nwith output1:\n fig, ax = plt.subplots(constrained_layout=True, figsize=(8, 4))\n \n# move the toolbar to the bottom\nfig.canvas.toolbar_position = 'bottom'\nax.grid(True)\n#line, = ax.plot(x_list, q_fin(x_list,k1,hi1,he1), initial_color, label='Q')\nline, = ax.plot(Ltot, Ttot, color='b', label='T')\nax.set_xlim(0,0.425)\nax.set_ylim(299,400)\nax.set_xlabel('x')\nax.set_ylabel('T (K)')\nax.legend()",
"_____no_output_____"
],
[
"#output1",
"_____no_output_____"
],
[
"text_0 = widgets.HTML(value=\"<p>Consider the case of a composite wall - Wall A and Wall B with Wall A being heat generating. Assume there is not heat resistance between boundaries of Wall A and B. The left side of the Wall A is at ambient temperature of Tair = Tw = 300 K. Wall A has parameters La, Ka, with heat generation Q=400 W/m3. Wall B has parameters Lb, Kb, and h.</p>\")\n\nvbox_text = widgets.VBox([text_0])",
"_____no_output_____"
],
[
"# create some control elements\nla_slider = widgets.FloatSlider(value=e1, min=0.15, max=0.35, step=0.05, description='La')\nlb_slider = widgets.FloatSlider(value=e2, min=0.015, max=0.075, step=0.005, description='Lb')\nka_slider = widgets.FloatSlider(value=k1, min=0.05, max=2, step=0.05, description='Ka')\nkb_slider = widgets.FloatSlider(value=k2, min=0.05, max=2, step=0.05, description='Kb')\nheb_slider = widgets.FloatSlider(value=he2, min=30, max=50, step=1, description='h')\n#Q_slider = widgets.FloatSlider(value=Q, min=390, max=500, step=10, description='Q')\n\n# callback functions\ndef update1(change):\n \"\"\"redraw line (update plot)\"\"\"\n xnew, ynew = T_A(Q,la_slider.value,ka_slider.value, heb_slider.value, lb_slider.value, kb_slider.value)\n line.set_xdata(xnew)\n line.set_ydata(ynew)\n fig.canvas.draw()\n\nbutton = widgets.Button(description=\"Reset\")\ndef on_button_clicked(b):\n with output1:\n la_slider.value = e1\n lb_slider.value = e2\n ka_slider.value = k1\n kb_slider.value = k2\n heb_slider.value = he2\n\nbutton.on_click(on_button_clicked)\n\nla_slider.observe(update1, 'value')\nlb_slider.observe(update1, 'value')\nka_slider.observe(update1, 'value')\nkb_slider.observe(update1, 'value')\nheb_slider.observe(update1, 'value')\n#Q_slider.observe(update1, 'value')\n",
"_____no_output_____"
],
[
"clear_output()\ncontrols = widgets.VBox([vbox_text, la_slider, lb_slider, ka_slider, kb_slider, heb_slider, button])\npage = widgets.HBox([controls, output1])\ndisplay(page)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb5be81e12d79b3759b16195cc5e6f0fc0501563 | 31,356 | ipynb | Jupyter Notebook | T1/ContourPlot.ipynb | esteban-rs/Optimization-I-CIMAT-2021 | 2d95ef61363a4eb1e06d3ebaa9a5c63971013d44 | [
"MIT"
] | null | null | null | T1/ContourPlot.ipynb | esteban-rs/Optimization-I-CIMAT-2021 | 2d95ef61363a4eb1e06d3ebaa9a5c63971013d44 | [
"MIT"
] | null | null | null | T1/ContourPlot.ipynb | esteban-rs/Optimization-I-CIMAT-2021 | 2d95ef61363a4eb1e06d3ebaa9a5c63971013d44 | [
"MIT"
] | null | null | null | 213.306122 | 27,572 | 0.912999 | [
[
[
"# Tarea 1\n\n## Problema 1\n1. Sea $ f_1(x_1, x_2) = x_1^2 - x_2^2 $, $f_2(x_1, x_2) = 2x_1x_2$. Represente los conjuntos de nivel asociados con $ f_1(x_1, x_2) = 12 $ y $ f_2(x_1, x_2) = 16 $ en la misma gráfica usando Python. Indique sobre la figura los puntos $ x = [x_1, x_2]^T $ para los cuales $ f(x) = [f_1(x_1,x_2), f_2(x_1, x_2)]^T = [12, 16]^T $.\n",
"_____no_output_____"
]
],
[
[
"# Import Libraries\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np ",
"_____no_output_____"
],
[
"# Install shapely for Windows10\n# conda install -c conda-forge shapely\nfrom shapely import geometry",
"_____no_output_____"
],
[
"# functions\n# read arguments as arrays and return and array\ndef f1(x1:np.array, x2:np.array):\n return x1*x1 - x2*x2\n\ndef f2(x1:np.array, x2:np.array):\n return 2*x1*x2\n\n# find intersections\n# def findIntersection(contour1, contour2):\n # Get 2D point from contours\n # p1 = [(e[0], e[1]) for e in contour1.collections[0].get_paths()[0].vertices]\n # p2 = [(e[0], e[1]) for e in contour2.collections[0].get_paths()[0].vertices]\n\n # s1 = geometry.LineString(p1)\n # s2 = geometry.LineString(p2)\n\n # return s1.intersection(s2)",
"_____no_output_____"
],
[
"# Intervals for x1,x2\nx1 = np.linspace(-20, 20, 200)\nx2 = np.linspace(-20, 20, 200)\n\n# Build two-dimentional grids from x1, x2\nX, Y = np.meshgrid(x1, x2)\n\n# Set plot\nplt.title('Contour Plot')\ncontours_f1 = plt.contour(X, Y, f1(X, Y), colors = 'black', levels = [12])\ncontours_f2 = plt.contour(X, Y, f2(X, Y), colors = 'blue', levels = [16])\n\n# Text labels\nplt.clabel(contours_f1, inline = True, fontsize = 8)\nplt.clabel(contours_f2, inline = True, fontsize = 8)\n\n# Find Intersections and add to plot\n# a = findIntersection(contours_f1,contours_f2)\n# plt.scatter(a.x, a.y, color='r', s=50)\n# plt.scatter(-a.x, -a.y, color='r', s=50)\n\n# Intersections found in pdf\n# (-4,-2), (4,2)\nplt.scatter(-4,-2, color='r', s=50)\nplt.scatter(4, 2, color='r', s=50)\n\nplt.grid(True)\nplt.xlabel('x1')\nplt.ylabel('x2')\n\n# show everything\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
cb5bf52dfdd96857025524e165e74b80a266a1b7 | 30,763 | ipynb | Jupyter Notebook | preprocessing/impute_missing_values.ipynb | shawlu95/Data_Science_Toolbox | 3f5dd554667dab51b59a1668488243161786b5f1 | [
"MIT"
] | 41 | 2019-05-04T11:02:43.000Z | 2022-02-20T02:37:01.000Z | preprocessing/impute_missing_values.ipynb | shawlu95/Data_Science_Toolbox | 3f5dd554667dab51b59a1668488243161786b5f1 | [
"MIT"
] | null | null | null | preprocessing/impute_missing_values.ipynb | shawlu95/Data_Science_Toolbox | 3f5dd554667dab51b59a1668488243161786b5f1 | [
"MIT"
] | 16 | 2019-04-05T00:49:16.000Z | 2021-04-15T08:06:43.000Z | 30.824649 | 93 | 0.335598 | [
[
[
"import numpy as np\nimport pandas as pd\nfrom collections import Counter\nimport operator",
"_____no_output_____"
],
[
"df = pd.read_csv(\"datasets/titanic_data.csv\")\ndf.head()",
"_____no_output_____"
]
],
[
[
"* using mean: weakens correlation\n* using linear regression: amplifies correlation",
"_____no_output_____"
],
[
"#### Impute by Mean",
"_____no_output_____"
]
],
[
[
"def impute_avg(df, col):\n \"\"\"impute by average.\"\"\"\n df[col] = df[col].fillna(np.mean(df[col]))",
"_____no_output_____"
],
[
"# remember which rows have missing age\ndf_no_age = df[df[\"Age\"].isnull()]\n\nimpute_avg(df, 'Age')\n\n# check that age has been imputed\ndf.loc[df_no_age.index].head()",
"_____no_output_____"
]
],
[
[
"#### Impute by Mode",
"_____no_output_____"
]
],
[
[
"counted = dict(Counter(df[\"Cabin\"].dropna().values))\nmax(counted.items(), key=operator.itemgetter(1))",
"_____no_output_____"
],
[
"counted[\"B96 B98\"]",
"_____no_output_____"
],
[
"def impute_mode(df, col):\n \"\"\"impute by mode.\"\"\"\n counted = dict(Counter(df[col].dropna().values))\n mode = max(counted.items(), key=operator.itemgetter(1))[0]\n df[col] = df[col].fillna(mode)",
"_____no_output_____"
],
[
"df_no_cabin = df[df[\"Cabin\"].isnull()]\nimpute_mode(df, 'Cabin')\ndf.loc[df_no_cabin.index].head()",
"_____no_output_____"
],
[
"df_no_embark = df[df[\"Embarked\"].isnull()]\nimpute_mode(df, 'Embarked')\ndf.loc[df_no_embark.index].head()",
"_____no_output_____"
]
],
[
[
"### Better Way: Sci-kit Learn",
"_____no_output_____"
]
],
[
[
"from sklearn.impute import SimpleImputer\nimp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')\ndf = pd.read_csv(\"datasets/titanic_data.csv\")",
"_____no_output_____"
],
[
"df_no_age = df[df[\"Age\"].isnull()]\n\n# use nested lists, since fit_transform requires 2D input\ndf.Age = imp_mean.fit_transform(df[[\"Age\"]].values)\n\ndf.loc[df_no_age.index].head()",
"_____no_output_____"
],
[
"imp_mode = SimpleImputer(missing_values=np.nan, strategy='most_frequent')",
"_____no_output_____"
],
[
"df_no_cabin = df[df[\"Cabin\"].isnull()]\n\ndf.Cabin = imp_mode.fit_transform(df[[\"Cabin\"]].values)\n\ndf.loc[df_no_cabin.index].head()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
cb5bfd9a135e0aa9ef73b8e05b3163ff543aca9c | 277,481 | ipynb | Jupyter Notebook | updown_visualization_gmm.ipynb | RuiShu/cvae | 55d17d8b5327d3e6c3a25d8b733eda42da55212c | [
"MIT"
] | 98 | 2016-04-03T12:50:41.000Z | 2022-03-26T11:17:21.000Z | notebooks/updown_visualization_gmm.ipynb | RuiShu/cvae | 55d17d8b5327d3e6c3a25d8b733eda42da55212c | [
"MIT"
] | 1 | 2016-06-28T23:26:22.000Z | 2016-06-29T17:49:27.000Z | updown_visualization_gmm.ipynb | RuiShu/cvae | 55d17d8b5327d3e6c3a25d8b733eda42da55212c | [
"MIT"
] | 30 | 2016-10-22T02:59:58.000Z | 2022-03-26T11:17:22.000Z | 273.380296 | 33,950 | 0.706935 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
cb5c005e4a44c22d59640f1c187dfd6cd78fb8d2 | 27,469 | ipynb | Jupyter Notebook | catboost/tutorials/custom_loss/custom_loss_and_metric_tutorial.ipynb | jochenater/catboost | de2786fbc633b0d6ea6a23b3862496c6151b95c2 | [
"Apache-2.0"
] | 6,989 | 2017-07-18T06:23:18.000Z | 2022-03-31T15:58:36.000Z | catboost/tutorials/custom_loss/custom_loss_and_metric_tutorial.ipynb | jochenater/catboost | de2786fbc633b0d6ea6a23b3862496c6151b95c2 | [
"Apache-2.0"
] | 1,978 | 2017-07-18T09:17:58.000Z | 2022-03-31T14:28:43.000Z | catboost/tutorials/custom_loss/custom_loss_and_metric_tutorial.ipynb | jochenater/catboost | de2786fbc633b0d6ea6a23b3862496c6151b95c2 | [
"Apache-2.0"
] | 1,228 | 2017-07-18T09:03:13.000Z | 2022-03-29T05:57:40.000Z | 37.993084 | 205 | 0.560013 | [
[
[
"# $$User\\ Defined\\ Metrics\\ Tutorial$$",
"_____no_output_____"
],
[
"[](https://colab.research.google.com/github/catboost/tutorials/blob/master/custom_loss/custom_loss_and_metric_tutorial.ipynb)",
"_____no_output_____"
],
[
"# Contents\n* [1. Introduction](#1.\\-Introduction)\n* [2. Classification](#2.\\-Classification)\n* [3. Regression](#3.\\-Regression)\n* [4. Multiclassification](#4.\\-Multiclassification)",
"_____no_output_____"
],
[
"# 1. Introduction",
"_____no_output_____"
],
[
"CatBoost allows you to create and pass to model your own loss functions and metrics. To do this you should implement classes with specicial interfaces.",
"_____no_output_____"
],
[
"##### Interface for user defined objectives:",
"_____no_output_____"
]
],
[
[
"class UserDefinedObjective(object):\n def calc_ders_range(self, approxes, targets, weights):\n # approxes, targets, weights are indexed containers of floats\n # (containers which have only __len__ and __getitem__ defined).\n # weights parameter can be None.\n #\n # To understand what these parameters mean, assume that there is\n # a subset of your dataset that is currently being processed.\n # approxes contains current predictions for this subset,\n # targets contains target values you provided with the dataset.\n #\n # This function should return a list of pairs (der1, der2), where\n # der1 is the first derivative of the loss function with respect\n # to the predicted value, and der2 is the second derivative.\n pass\n \nclass UserDefinedMultiClassObjective(object):\n def calc_ders_multi(self, approxes, target, weight):\n # approxes - indexed container of floats with predictions \n # for each dimension of single object\n # target - contains a single expected value\n # weight - contains weight of the object\n #\n # This function should return a tuple (der1, der2), where\n # - der1 is a list-like object of first derivatives of the loss function with respect\n # to the predicted value for each dimension.\n # - der2 is a matrix of second derivatives.\n pass",
"_____no_output_____"
]
],
[
[
"##### Interface for user defined metrics:",
"_____no_output_____"
]
],
[
[
"class UserDefinedMetric(object):\n def is_max_optimal(self):\n # Returns whether great values of metric are better\n pass\n\n def evaluate(self, approxes, target, weight):\n # approxes is a list of indexed containers\n # (containers with only __len__ and __getitem__ defined),\n # one container per approx dimension.\n # Each container contains floats.\n # weight is a one dimensional indexed container.\n # target is a one dimensional indexed container.\n \n # weight parameter can be None.\n # Returns pair (error, weights sum)\n pass\n \n def get_final_error(self, error, weight):\n # Returns final value of metric based on error and weight\n pass",
"_____no_output_____"
]
],
[
[
"Below we consider examples of user defined metrics for different types of tasks. We will use the following variables:\n<center>$a$ - approx value</center>\n<center>$p$ - probability</center>\n<center>$t$ - target</center>\n<center>$w$ - weight</center>",
"_____no_output_____"
]
],
[
[
"# import neccessary packages\nfrom catboost import CatBoostClassifier, CatBoostRegressor\nimport numpy as np\nfrom sklearn.datasets import make_classification, make_regression\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
]
],
[
[
"# 2. Classification",
"_____no_output_____"
],
[
"Note: for binary classification problems approxes are not equal to probabilities. Probabilities are calculated from approxes using sigmoid function.\n<h4><center>$p=\\frac{1}{1 + e^{-a}}=\\frac{e^a}{1 + e^a}$</center></h4>\nAs an example, let's take Logloss metric which is defined by the following formula:\n<h4><center>$Logloss_i = -{w_i * (t_i * log(p_i) + (1 - t_i) * log(1 - p_i))}$</center></h4>\n<h4><center>$Logloss = \\frac{\\sum_{i=1}^{N}{Logloss_i}}{\\sum_{i=1}^{N}{w_i}}$</center></h4>\nThis metric has derivative and can be used as objective. The derivatives of Logloss for single object are defined by the following formulas:\n<h4><center>$\\frac{\\delta(Logloss_i)}{\\delta(a)} = w_i * (t_i - p_i)$</center></h4>\n<h4><center>$\\frac{\\delta^2(Logloss_i)}{\\delta(a^2)} = -w_i * p_i * (1 - p_i)$</center></h4>\nBelow you can see implemented Logloss objective and metric.",
"_____no_output_____"
]
],
[
[
"class LoglossObjective(object):\n def calc_ders_range(self, approxes, targets, weights):\n assert len(approxes) == len(targets)\n if weights is not None:\n assert len(weights) == len(approxes)\n \n result = []\n for index in range(len(targets)):\n e = np.exp(approxes[index])\n p = e / (1 + e)\n der1 = targets[index] - p\n der2 = -p * (1 - p)\n\n if weights is not None:\n der1 *= weights[index]\n der2 *= weights[index]\n\n result.append((der1, der2))\n return result",
"_____no_output_____"
],
[
"class LoglossMetric(object):\n def get_final_error(self, error, weight):\n return error / (weight + 1e-38)\n\n def is_max_optimal(self):\n return False\n\n def evaluate(self, approxes, target, weight):\n assert len(approxes) == 1\n assert len(target) == len(approxes[0])\n\n approx = approxes[0]\n\n error_sum = 0.0\n weight_sum = 0.0\n\n for i in range(len(approx)):\n e = np.exp(approx[i])\n p = e / (1 + e)\n w = 1.0 if weight is None else weight[i]\n weight_sum += w\n error_sum += -w * (target[i] * np.log(p) + (1 - target[i]) * np.log(1 - p))\n\n return error_sum, weight_sum",
"_____no_output_____"
]
],
[
[
"Below there are examples of training with built-in Logloss function and our Logloss objective and metric. As we can see, the results are the same.",
"_____no_output_____"
]
],
[
[
"X, y = make_classification(n_classes=2, random_state=0)\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)",
"_____no_output_____"
],
[
"model1 = CatBoostClassifier(iterations=10, loss_function='Logloss', eval_metric='Logloss',\n learning_rate=0.03, bootstrap_type='Bayesian', boost_from_average=False,\n leaf_estimation_iterations=1, leaf_estimation_method='Gradient')\nmodel1.fit(X_train, y_train, eval_set=(X_test, y_test))",
"0:\tlearn: 0.6900380\ttest: 0.6907175\tbest: 0.6907175 (0)\ttotal: 49.5ms\tremaining: 446ms\n1:\tlearn: 0.6866060\ttest: 0.6873479\tbest: 0.6873479 (1)\ttotal: 51.8ms\tremaining: 207ms\n2:\tlearn: 0.6835392\ttest: 0.6852325\tbest: 0.6852325 (2)\ttotal: 54.1ms\tremaining: 126ms\n3:\tlearn: 0.6804590\ttest: 0.6829075\tbest: 0.6829075 (3)\ttotal: 56.4ms\tremaining: 84.6ms\n4:\tlearn: 0.6776740\ttest: 0.6816999\tbest: 0.6816999 (4)\ttotal: 58.6ms\tremaining: 58.6ms\n5:\tlearn: 0.6749116\ttest: 0.6794533\tbest: 0.6794533 (5)\ttotal: 61.8ms\tremaining: 41.2ms\n6:\tlearn: 0.6712701\ttest: 0.6772634\tbest: 0.6772634 (6)\ttotal: 65ms\tremaining: 27.8ms\n7:\tlearn: 0.6681755\ttest: 0.6747041\tbest: 0.6747041 (7)\ttotal: 68.2ms\tremaining: 17ms\n8:\tlearn: 0.6658881\ttest: 0.6732683\tbest: 0.6732683 (8)\ttotal: 71.3ms\tremaining: 7.93ms\n9:\tlearn: 0.6633931\ttest: 0.6720979\tbest: 0.6720979 (9)\ttotal: 73.7ms\tremaining: 0us\n\nbestTest = 0.6720978617\nbestIteration = 9\n\n"
],
[
"model2 = CatBoostClassifier(iterations=10, loss_function=LoglossObjective(), eval_metric=LoglossMetric(), \n learning_rate=0.03, bootstrap_type='Bayesian', boost_from_average=False,\n leaf_estimation_iterations=1, leaf_estimation_method='Gradient')\nmodel2.fit(X_train, y_train, eval_set=(X_test, y_test))",
"0:\tlearn: 0.6900380\ttest: 0.6907175\tbest: 0.6907175 (0)\ttotal: 4.36ms\tremaining: 39.2ms\n1:\tlearn: 0.6866060\ttest: 0.6873479\tbest: 0.6873479 (1)\ttotal: 9.44ms\tremaining: 37.8ms\n2:\tlearn: 0.6835392\ttest: 0.6852325\tbest: 0.6852325 (2)\ttotal: 15.2ms\tremaining: 35.5ms\n3:\tlearn: 0.6804590\ttest: 0.6829075\tbest: 0.6829075 (3)\ttotal: 19.8ms\tremaining: 29.6ms\n4:\tlearn: 0.6776740\ttest: 0.6816999\tbest: 0.6816999 (4)\ttotal: 24.5ms\tremaining: 24.5ms\n5:\tlearn: 0.6749116\ttest: 0.6794533\tbest: 0.6794533 (5)\ttotal: 29.2ms\tremaining: 19.5ms\n6:\tlearn: 0.6712701\ttest: 0.6772634\tbest: 0.6772634 (6)\ttotal: 34.8ms\tremaining: 14.9ms\n7:\tlearn: 0.6681755\ttest: 0.6747041\tbest: 0.6747041 (7)\ttotal: 40ms\tremaining: 10ms\n8:\tlearn: 0.6658881\ttest: 0.6732683\tbest: 0.6732683 (8)\ttotal: 45.2ms\tremaining: 5.03ms\n9:\tlearn: 0.6633931\ttest: 0.6720979\tbest: 0.6720979 (9)\ttotal: 50.6ms\tremaining: 0us\n\nbestTest = 0.6720978617\nbestIteration = 9\n\n"
]
],
[
[
"# 3. Regression",
"_____no_output_____"
],
[
"For regression approxes don't need any transformations. As an example of regression loss function and metric we take well-known RMSE which is defined by the following formulas:\n<h3><center>$RMSE = \\sqrt{\\frac{\\sum_{i=1}^{N}{w_i * (t_i - a_i)^2}}{\\sum_{i=1}^{N}{w_i}}}$</center></h3>\n<h4><center>$\\frac{\\delta(RMSE_i)}{\\delta(a)} = w_i * (t_i - a_i)$</center></h4>\n<h4><center>$\\frac{\\delta^2(RMSE_i)}{\\delta(a^2)} = -w_i$</center></h4>",
"_____no_output_____"
]
],
[
[
"class RmseObjective(object):\n def calc_ders_range(self, approxes, targets, weights):\n assert len(approxes) == len(targets)\n if weights is not None:\n assert len(weights) == len(approxes)\n \n result = []\n for index in range(len(targets)):\n der1 = targets[index] - approxes[index]\n der2 = -1\n\n if weights is not None:\n der1 *= weights[index]\n der2 *= weights[index]\n\n result.append((der1, der2))\n return result",
"_____no_output_____"
],
[
"class RmseMetric(object):\n def get_final_error(self, error, weight):\n return np.sqrt(error / (weight + 1e-38))\n\n def is_max_optimal(self):\n return False\n\n def evaluate(self, approxes, target, weight):\n assert len(approxes) == 1\n assert len(target) == len(approxes[0])\n\n approx = approxes[0]\n\n error_sum = 0.0\n weight_sum = 0.0\n\n for i in range(len(approx)):\n w = 1.0 if weight is None else weight[i]\n weight_sum += w\n error_sum += w * ((approx[i] - target[i])**2)\n\n return error_sum, weight_sum",
"_____no_output_____"
]
],
[
[
"Below there are examples of training with built-in RMSE function and our RMSE objective and metric. As we can see, the results are the same.",
"_____no_output_____"
]
],
[
[
"X, y = make_regression(random_state=0)\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)",
"_____no_output_____"
],
[
"model1 = CatBoostRegressor(iterations=10, loss_function='RMSE', eval_metric='RMSE',\n learning_rate=0.03, bootstrap_type='Bayesian', boost_from_average=False,\n leaf_estimation_iterations=1, leaf_estimation_method='Gradient')\nmodel1.fit(X_train, y_train, eval_set=(X_test, y_test))",
"0:\tlearn: 128.6631656\ttest: 140.6536718\tbest: 140.6536718 (0)\ttotal: 3.86ms\tremaining: 34.8ms\n1:\tlearn: 128.0351695\ttest: 140.7369887\tbest: 140.6536718 (0)\ttotal: 8.51ms\tremaining: 34ms\n2:\tlearn: 126.7781283\ttest: 141.0444768\tbest: 140.6536718 (0)\ttotal: 11.6ms\tremaining: 27.2ms\n3:\tlearn: 125.7603646\ttest: 141.1458855\tbest: 140.6536718 (0)\ttotal: 15.9ms\tremaining: 23.8ms\n4:\tlearn: 124.6922146\ttest: 141.0856002\tbest: 140.6536718 (0)\ttotal: 18.6ms\tremaining: 18.6ms\n5:\tlearn: 123.6667350\ttest: 141.0495141\tbest: 140.6536718 (0)\ttotal: 21.1ms\tremaining: 14.1ms\n6:\tlearn: 122.7210914\ttest: 140.8511986\tbest: 140.6536718 (0)\ttotal: 23.7ms\tremaining: 10.2ms\n7:\tlearn: 121.8418528\ttest: 140.7646996\tbest: 140.6536718 (0)\ttotal: 26.3ms\tremaining: 6.58ms\n8:\tlearn: 121.0103984\ttest: 140.4834561\tbest: 140.4834561 (8)\ttotal: 28.9ms\tremaining: 3.21ms\n9:\tlearn: 119.9286951\ttest: 140.2935285\tbest: 140.2935285 (9)\ttotal: 31.5ms\tremaining: 0us\n\nbestTest = 140.2935285\nbestIteration = 9\n\n"
],
[
"model2 = CatBoostRegressor(iterations=10, loss_function=RmseObjective(), eval_metric=RmseMetric(),\n learning_rate=0.03, bootstrap_type='Bayesian', boost_from_average=False,\n leaf_estimation_iterations=1, leaf_estimation_method='Gradient')\nmodel2.fit(X_train, y_train, eval_set=(X_test, y_test))",
"0:\tlearn: 128.6631656\ttest: 140.6536718\tbest: 140.6536718 (0)\ttotal: 4.01ms\tremaining: 36.1ms\n1:\tlearn: 128.0351695\ttest: 140.7369887\tbest: 140.6536718 (0)\ttotal: 6.72ms\tremaining: 26.9ms\n2:\tlearn: 126.7781283\ttest: 141.0444768\tbest: 140.6536718 (0)\ttotal: 9.52ms\tremaining: 22.2ms\n3:\tlearn: 125.7603646\ttest: 141.1458855\tbest: 140.6536718 (0)\ttotal: 12.2ms\tremaining: 18.3ms\n4:\tlearn: 124.6922146\ttest: 141.0856002\tbest: 140.6536718 (0)\ttotal: 17.5ms\tremaining: 17.5ms\n5:\tlearn: 123.6667350\ttest: 141.0495141\tbest: 140.6536718 (0)\ttotal: 20.6ms\tremaining: 13.7ms\n6:\tlearn: 122.7210914\ttest: 140.8511986\tbest: 140.6536718 (0)\ttotal: 23.4ms\tremaining: 10ms\n7:\tlearn: 121.8418528\ttest: 140.7646996\tbest: 140.6536718 (0)\ttotal: 26.4ms\tremaining: 6.59ms\n8:\tlearn: 121.0103984\ttest: 140.4834561\tbest: 140.4834561 (8)\ttotal: 30.5ms\tremaining: 3.39ms\n9:\tlearn: 119.9286951\ttest: 140.2935285\tbest: 140.2935285 (9)\ttotal: 35.2ms\tremaining: 0us\n\nbestTest = 140.2935285\nbestIteration = 9\n\n"
]
],
[
[
"# 4. Multiclassification",
"_____no_output_____"
],
[
"Note: for multiclassification problems approxes are not equal to probabilities. Usually approxes are transformed to probabilities using Softmax function.\n<h3><center>$p_{i,c} = \\frac{e^{a_{i,c}}}{\\sum_{j=1}^k{e^{a_{i,j}}}}$</center></h3>\n<center>$p_{i,c}$ - the probability that $x_i$ belongs to class $c$</center>\n<center>$k$ - number of classes</center>\n<center>$a_{i,j}$ - approx for object $x_i$ for class $j$</center>",
"_____no_output_____"
],
[
"Let's implement MultiClass objective that is defined as follows:\n<h3><center>$MultiClass_i = w_i * \\log{p_{i,t_i}}$</center></h3>\n<h3><center>$MultiClass = \\frac{\\sum_{i=1}^{N}Multiclass_i}{\\sum_{i=1}^{N}w_i}$</center></h3>\n\n<h3><center>$\\frac{\\delta(Multiclass_i)}{\\delta{a_{i,c}}} = \\begin{cases} \nw_i-\\frac{w_i*e^{a_{i,c}}}{\\sum_{j=1}^{k}e^{a_{i,j}}}, & \\mbox{if } c = t_i \\\\ \n-\\frac{w_i*e^{a_{i,c}}}{\\sum_{j=1}^{k}e^{a_{i,j}}}, & \\mbox{if } c \\neq t_i \n\\end{cases}$</center></h3>\n\n<h3><center>$\\frac{\\delta^2(Multiclass_i)}{\\delta{a_{i,c_1}}\\delta{a_{i,c_2}}} = \\begin{cases} \n\\frac{w_i*e^{2*a_{i,c_1}}}{(\\sum_{j=1}^{k}e^{a_{i,j}})^2} - \\frac{w_i*e^{a_{i, c_1}}}{\\sum_{j=1}^{k}e^{a_{i,j}}}, & \\mbox{if } c_1 = c_2 \\\\ \n\\frac{w_i*e^{a_{i,c_1}+a_{i,c_2}}}{(\\sum_{j=1}^{k}e^{a_{i,j}})^2}, & \\mbox{if } c_1 \\neq c_2 \n\\end{cases}$</center></h3>",
"_____no_output_____"
]
],
[
[
"class MultiClassObjective(object):\n def calc_ders_multi(self, approx, target, weight):\n approx = np.array(approx) - max(approx)\n exp_approx = np.exp(approx)\n exp_sum = exp_approx.sum()\n grad = []\n hess = []\n for j in range(len(approx)):\n der1 = -exp_approx[j] / exp_sum\n if j == target:\n der1 += 1\n hess_row = []\n for j2 in range(len(approx)):\n der2 = exp_approx[j] * exp_approx[j2] / (exp_sum**2)\n if j2 == j:\n der2 -= exp_approx[j] / exp_sum\n hess_row.append(der2 * weight)\n \n grad.append(der1 * weight)\n hess.append(hess_row)\n \n return (grad, hess)",
"_____no_output_____"
],
[
"class AccuracyMetric(object):\n def get_final_error(self, error, weight):\n return error / (weight + 1e-38)\n\n def is_max_optimal(self):\n return True\n\n def evaluate(self, approxes, target, weight):\n best_class = np.argmax(approxes, axis=0)\n \n accuracy_sum = 0\n weight_sum = 0 \n\n for i in range(len(target)):\n w = 1.0 if weight is None else weight[i]\n weight_sum += w\n accuracy_sum += w * (best_class[i] == target[i])\n\n return accuracy_sum, weight_sum",
"_____no_output_____"
]
],
[
[
"Below there are examples of training with built-in MultiClass function and our MultiClass objective. As we can see, the results are the same.",
"_____no_output_____"
]
],
[
[
"X, y = make_classification(n_samples=1000, n_features=50, n_informative=40, n_classes=5, random_state=0)\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)",
"_____no_output_____"
],
[
"model1 = CatBoostClassifier(iterations=10, loss_function='MultiClass', eval_metric='Accuracy',\n learning_rate=0.03, bootstrap_type='Bayesian', boost_from_average=False,\n leaf_estimation_iterations=1, leaf_estimation_method='Newton', classes_count=5)\nmodel1.fit(X_train, y_train, eval_set=(X_test, y_test))",
"0:\tlearn: 0.3706667\ttest: 0.2400000\tbest: 0.2400000 (0)\ttotal: 22.3ms\tremaining: 201ms\n1:\tlearn: 0.4813333\ttest: 0.2760000\tbest: 0.2760000 (1)\ttotal: 35.2ms\tremaining: 141ms\n2:\tlearn: 0.5400000\ttest: 0.3120000\tbest: 0.3120000 (2)\ttotal: 46.9ms\tremaining: 109ms\n3:\tlearn: 0.6026667\ttest: 0.3040000\tbest: 0.3120000 (2)\ttotal: 59.3ms\tremaining: 88.9ms\n4:\tlearn: 0.6573333\ttest: 0.3120000\tbest: 0.3120000 (2)\ttotal: 71.4ms\tremaining: 71.4ms\n5:\tlearn: 0.6933333\ttest: 0.3360000\tbest: 0.3360000 (5)\ttotal: 83.3ms\tremaining: 55.5ms\n6:\tlearn: 0.7000000\ttest: 0.3440000\tbest: 0.3440000 (6)\ttotal: 95.4ms\tremaining: 40.9ms\n7:\tlearn: 0.7040000\ttest: 0.3520000\tbest: 0.3520000 (7)\ttotal: 107ms\tremaining: 26.9ms\n8:\tlearn: 0.7293333\ttest: 0.3720000\tbest: 0.3720000 (8)\ttotal: 120ms\tremaining: 13.3ms\n9:\tlearn: 0.7600000\ttest: 0.3960000\tbest: 0.3960000 (9)\ttotal: 132ms\tremaining: 0us\n\nbestTest = 0.396\nbestIteration = 9\n\n"
],
[
"model2 = CatBoostClassifier(iterations=10, loss_function=MultiClassObjective(), eval_metric=AccuracyMetric(),\n learning_rate=0.03, bootstrap_type='Bayesian', boost_from_average=False,\n leaf_estimation_iterations=1, leaf_estimation_method='Newton', classes_count=5)\nmodel2.fit(X_train, y_train, eval_set=(X_test, y_test))",
"0:\tlearn: 0.3706667\ttest: 0.2520000\tbest: 0.2520000 (0)\ttotal: 217ms\tremaining: 1.95s\n1:\tlearn: 0.4813333\ttest: 0.2760000\tbest: 0.2760000 (1)\ttotal: 432ms\tremaining: 1.73s\n2:\tlearn: 0.5400000\ttest: 0.3120000\tbest: 0.3120000 (2)\ttotal: 649ms\tremaining: 1.51s\n3:\tlearn: 0.6026667\ttest: 0.3040000\tbest: 0.3120000 (2)\ttotal: 863ms\tremaining: 1.29s\n4:\tlearn: 0.6573333\ttest: 0.3120000\tbest: 0.3120000 (2)\ttotal: 1.08s\tremaining: 1.08s\n5:\tlearn: 0.6933333\ttest: 0.3360000\tbest: 0.3360000 (5)\ttotal: 1.3s\tremaining: 869ms\n6:\tlearn: 0.7000000\ttest: 0.3440000\tbest: 0.3440000 (6)\ttotal: 1.52s\tremaining: 653ms\n7:\tlearn: 0.7040000\ttest: 0.3520000\tbest: 0.3520000 (7)\ttotal: 1.75s\tremaining: 436ms\n8:\tlearn: 0.7293333\ttest: 0.3720000\tbest: 0.3720000 (8)\ttotal: 1.96s\tremaining: 218ms\n9:\tlearn: 0.7600000\ttest: 0.3960000\tbest: 0.3960000 (9)\ttotal: 2.18s\tremaining: 0us\n\nbestTest = 0.396\nbestIteration = 9\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
cb5c0071debfb0aa756f46b3d26f668f912a94a9 | 51,423 | ipynb | Jupyter Notebook | Sesiones/Sesion 3/oscar_peralta.ipynb | EsauPR/CIMAT-NLP | 8adcf52866a4709c4846e78d907e98bab273ea65 | [
"MIT"
] | null | null | null | Sesiones/Sesion 3/oscar_peralta.ipynb | EsauPR/CIMAT-NLP | 8adcf52866a4709c4846e78d907e98bab273ea65 | [
"MIT"
] | null | null | null | Sesiones/Sesion 3/oscar_peralta.ipynb | EsauPR/CIMAT-NLP | 8adcf52866a4709c4846e78d907e98bab273ea65 | [
"MIT"
] | null | null | null | 51,423 | 51,423 | 0.736888 | [
[
[
"# Nombre: Oscar Esaú Peralta Rosales\n## Procesamiento de Lenguaje Natural\n## Práctica 3: Bolsas de Términos y esquemas de pesado",
"_____no_output_____"
],
[
"### Lectura simple de datos",
"_____no_output_____"
]
],
[
[
"import os\nimport re\nimport math\n\nfrom keras.preprocessing.text import Tokenizer\n\ndef get_texts_from_file(path_corpus, path_truth):\n tr_txt = []\n tr_y = []\n with open(path_corpus, \"r\") as f_corpus, open(path_truth, \"r\") as f_truth:\n for twitt in f_corpus:\n tr_txt += [twitt]\n for label in f_truth:\n tr_y += [label] \n return tr_txt, tr_y\n\n",
"_____no_output_____"
],
[
"tr_txt, tr_y = get_texts_from_file(\"./mex_train.txt\", \"./mex_train_labels.txt\")",
"_____no_output_____"
]
],
[
[
"### Estadisticas Simples",
"_____no_output_____"
]
],
[
[
"tr_y = list(map(int, tr_y))",
"_____no_output_____"
],
[
"from collections import Counter\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nprint(Counter(tr_y))\nplt.hist(tr_y, bins=len(set(tr_y)))\nplt.ylabel('Users');\nplt.xlabel('Class');",
"Counter({0: 3563, 1: 1981})\n"
]
],
[
[
"# Un ojo a los datos",
"_____no_output_____"
]
],
[
[
"tr_txt[:10]",
"_____no_output_____"
]
],
[
[
"### Construcción simple del vocabulario",
"_____no_output_____"
]
],
[
[
"import nltk",
"_____no_output_____"
],
[
"corpus_palabras = []\nfor doc in tr_txt:\n corpus_palabras += doc.split()\n#print(corpus_palabras)\nfdist = nltk.FreqDist(corpus_palabras)",
"_____no_output_____"
],
[
"fdist",
"_____no_output_____"
],
[
"len(fdist)",
"_____no_output_____"
],
[
"def sortFreqDict(freqdict):\n aux = [(freqdict[key], key) for key in freqdict]\n aux.sort()\n aux.reverse()\n return aux",
"_____no_output_____"
],
[
"V = sortFreqDict(fdist)\nV = V[:5000]",
"_____no_output_____"
],
[
"dict_indices = dict()\ncont = 0\nfor weight, word in V:\n dict_indices[word] = cont\n cont += 1",
"_____no_output_____"
]
],
[
[
"### Bolsa de Términos",
"_____no_output_____"
]
],
[
[
"import numpy as np\ndef build_bow_tr(tr_txt, V, dict_indices):\n BOW = np.zeros((len(tr_txt),len(V)), dtype=int)\n cont_doc = 0\n for tr in tr_txt:\n fdist_doc = nltk.FreqDist(tr.split())\n for word in fdist_doc:\n if word in dict_indices:\n BOW[cont_doc, dict_indices[word]] = 1\n cont_doc += 1\n \n return BOW",
"_____no_output_____"
]
],
[
[
"### Debug?",
"_____no_output_____"
]
],
[
[
"tr_txt[10]",
"_____no_output_____"
],
[
"\nfdist_doc = nltk.FreqDist(tr_txt[10].split())",
"_____no_output_____"
],
[
"fdist_doc",
"_____no_output_____"
]
],
[
[
"### Bolsa de Terminos en Validación",
"_____no_output_____"
]
],
[
[
"BOW_tr=build_bow_tr(tr_txt, V, dict_indices)",
"_____no_output_____"
],
[
"print(V[:10])",
"[(3342, 'de'), (3336, 'que'), (2605, 'a'), (2417, 'la'), (2225, 'y'), (1743, 'no'), (1582, 'me'), (1285, 'el'), (1243, '@usuario'), (1184, 'en')]\n"
],
[
"import sys\nimport numpy\nnumpy.set_printoptions(threshold=sys.maxsize)\n#print(BOW[10])",
"_____no_output_____"
],
[
"val_txt, val_y = get_texts_from_file(\"./mex_val.txt\", \"./mex_val_labels.txt\")",
"_____no_output_____"
],
[
"val_y = list(map(int, val_y))",
"_____no_output_____"
],
[
"from collections import Counter\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nprint(Counter(val_y))\nplt.hist(val_y, bins=len(set(val_y)))\nplt.ylabel('Users');\nplt.xlabel('Class');",
"Counter({0: 397, 1: 219})\n"
],
[
"val_txt[:10]",
"_____no_output_____"
],
[
"BOW_val=build_bow_tr(val_txt, V, dict_indices)",
"_____no_output_____"
]
],
[
[
"### Clasificación",
"_____no_output_____"
]
],
[
[
"import csv\nimport argparse\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.metrics import accuracy_score, confusion_matrix, f1_score, precision_recall_fscore_support, roc_auc_score\nfrom sklearn import metrics, preprocessing\nimport numpy as np\n\nfrom sklearn import svm, datasets\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import GridSearchCV\n \nparameters = {'C': [.05, .12, .25, .5, 1, 2, 4]}\n\nsvr = svm.LinearSVC(class_weight='balanced')\ngrid = GridSearchCV(estimator=svr, param_grid=parameters, n_jobs=8, scoring=\"f1_macro\", cv=5)\n\ngrid.fit(BOW_tr, tr_y) \n\ny_pred = grid.predict(BOW_val)\n\np, r, f, _ = precision_recall_fscore_support(val_y, y_pred, average='macro', pos_label=None)\n\nprint(confusion_matrix(val_y, y_pred))\nprint(metrics.classification_report(val_y, y_pred))",
"[[327 70]\n [ 58 161]]\n precision recall f1-score support\n\n 0 0.85 0.82 0.84 397\n 1 0.70 0.74 0.72 219\n\n accuracy 0.79 616\n macro avg 0.77 0.78 0.78 616\nweighted avg 0.80 0.79 0.79 616\n\n"
]
],
[
[
"### Errores",
"_____no_output_____"
]
],
[
[
"incorrect = []\nfor e in zip(val_y,y_pred,range(len(val_y))):\n #print(e[0])\n #print(e[1])\n if e[0] != e[1]:\n incorrect += [e[2]]",
"_____no_output_____"
],
[
"for e in incorrect:\n case = e\n if \"madre\" in val_txt[case].strip():\n print(\"Texto: \", val_txt[case].strip())\n print(\"Truth: \", val_y[case])\n print(\"Pred: \", y_pred[case])\n #print(\"PredProba: \", y_pred_proba[case])",
"Texto: ya a cualquier prieto le dicen licenciado peludito tengan madre\nTruth: 1\nPred: 0\nTexto: ahora si a chingar a su madre la fecha fifa y ahora si a disfrutar de nuestra gloriosa liga mx.\nTruth: 0\nPred: 1\nTexto: por qué chingadasmadres no matan al rival puta madre!!! @usuario 😡😡😡😡😡😡😡😡😡\nTruth: 0\nPred: 1\nTexto: la neta... si yo fuera messi les dijera \"chinguen a su madre putos yo hice todo\".\nTruth: 0\nPred: 1\nTexto: qué pedo ..?? con el norte llueve o que verga ..??? mucho desmadre de tráfico y el metro del pitó !!!\nTruth: 0\nPred: 1\nTexto: pinche hocicón... ya acepta la voluntad de tu madre... sí es que tienes que lo dudo tu misma madre te mando a la verga del testamento...\nTruth: 1\nPred: 0\nTexto: no quería mentarte la madre tan temprano pinche maricón il \"buasap\" pinche millenial cacha moscas\nTruth: 1\nPred: 0\nTexto: los putos a chin.... a su madre! ya me harté de seguir todo el mundo seguiré lo que yo quiera.\nTruth: 1\nPred: 0\nTexto: @usuario no tienen abuela madre vergüenza....\nTruth: 1\nPred: 0\nTexto: firmado por toda la pandilla de @usuario a toda madre estos carnales saludos cuernavaca presente. yeah !!\nTruth: 0\nPred: 1\nTexto: dicen que el amor es de putos! y pues q puto resulte jajaja valí madre desde que la conocí!\nTruth: 0\nPred: 1\nTexto: ni madres si se quiere ir que se vaya alv. a rogar a la iglesia. no mereces miserias!!!\nTruth: 1\nPred: 0\nTexto: @usuario pa la madre!!!!!! han de ser de importacion\nTruth: 0\nPred: 1\nTexto: prácticamente lauren dijo \"me la pelan todos vayanse a chingar a su madre putos\" pero con estilo i lov mi reina\nTruth: 0\nPred: 1\nTexto: qué putas madres tienen en la cabeza para que se les antoje un cigarro antes de las 8:00 am??\nTruth: 0\nPred: 1\nTexto: ya chingo a su madre la güera loca de trump como lider munfial ya nadie confia en el ni en los gringos...juar...juar...juar...!!!\nTruth: 1\nPred: 0\nTexto: esto si es no tener madre no tener respeto por absolutamente nada. una cosa es ejecutar al enemigo pero matar con saña a civiles indigna. <url>\nTruth: 1\nPred: 0\nTexto: ah qué poca madre aretes y pestañas al perro que las usen sus pinches dueñas pars ver si se les quita lo ignorante. <url>\nTruth: 0\nPred: 1\nTexto: ¿ se refiere usted a la hibris de vicente fox y martita ? @usuario o ésa hibris le vale madre ?\nTruth: 0\nPred: 1\n"
]
],
[
[
"# Tu turno:\n## Realiza los siguientes ejercicios en esta clase:",
"_____no_output_____"
],
[
"## 1) Bolsa de Palabras con frecuencia y clasifique: Haga bolsa de palabras en dónde cada término tenga frecuencia bruta en lugar de pesado binario",
"_____no_output_____"
]
],
[
[
"def build_bow_tr_frec(tr_txt, V, dict_indices):\n BOW = np.zeros((len(tr_txt),len(V)), dtype=int)\n cont_doc = 0\n for tr in tr_txt:\n fdist_doc = nltk.FreqDist(tr.split())\n for word in fdist_doc:\n if word in dict_indices:\n BOW[cont_doc, dict_indices[word]] = fdist_doc[word]\n cont_doc += 1\n\n return BOW",
"_____no_output_____"
],
[
"BOW_txt_frec = build_bow_tr_frec(tr_txt, V, dict_indices)\nBOW_val_frec = build_bow_tr_frec(val_txt, V, dict_indices)",
"_____no_output_____"
],
[
"parameters = {'C': [.05, .12, .25, .5, 1, 2, 4]}\n\nsvr = svm.LinearSVC(class_weight='balanced')\ngrid = GridSearchCV(estimator=svr, param_grid=parameters, n_jobs=8, scoring=\"f1_macro\", cv=5)\n\ngrid.fit(BOW_txt_frec, tr_y) \n\ny_pred = grid.predict(BOW_val_frec)\n\np, r, f, _ = precision_recall_fscore_support(val_y, y_pred, average='macro', pos_label=None)\n\nprint(confusion_matrix(val_y, y_pred))\nprint(metrics.classification_report(val_y, y_pred))",
"[[335 62]\n [ 52 167]]\n precision recall f1-score support\n\n 0 0.87 0.84 0.85 397\n 1 0.73 0.76 0.75 219\n\n accuracy 0.81 616\n macro avg 0.80 0.80 0.80 616\nweighted avg 0.82 0.81 0.82 616\n\n"
]
],
[
[
"## 2) Bolsa de Palabras con frecuencia normalizada y clasifique: Haga bolsa de palabras en dónde cada término tenga frecuencia normalizada a sumar 1 por documento",
"_____no_output_____"
]
],
[
[
"def build_bow_tr_frec_norm(tr_txt, V, dict_indices):\n BOW = np.zeros((len(tr_txt),len(V)), dtype=np.float64)\n cont_doc = 0\n for tr in tr_txt:\n fdist_doc = nltk.FreqDist(tr.split())\n len_fd = len(fdist_doc)\n for word in fdist_doc:\n if word in dict_indices:\n BOW[cont_doc, dict_indices[word]] = fdist_doc[word] / len_fd\n cont_doc += 1\n\n return BOW",
"_____no_output_____"
],
[
"BOW_txt_frec_norm = build_bow_tr_frec_norm(tr_txt, V, dict_indices)\nBOW_val_frec_norm = build_bow_tr_frec_norm(val_txt, V, dict_indices)",
"_____no_output_____"
],
[
"parameters = {'C': [.05, .12, .25, .5, 1, 2, 4]}\n\nsvr = svm.LinearSVC(class_weight='balanced')\ngrid = GridSearchCV(estimator=svr, param_grid=parameters, n_jobs=8, scoring=\"f1_macro\", cv=5)\n\ngrid.fit(BOW_txt_frec_norm, tr_y) \n\ny_pred = grid.predict(BOW_val_frec_norm)\n\np, r, f, _ = precision_recall_fscore_support(val_y, y_pred, average='macro', pos_label=None)\n\nprint(confusion_matrix(val_y, y_pred))\nprint(metrics.classification_report(val_y, y_pred))",
"[[313 84]\n [ 54 165]]\n precision recall f1-score support\n\n 0 0.85 0.79 0.82 397\n 1 0.66 0.75 0.71 219\n\n accuracy 0.78 616\n macro avg 0.76 0.77 0.76 616\nweighted avg 0.79 0.78 0.78 616\n\n"
]
],
[
[
"## 3) Bolsa de Palabras Normalizada con la norma del vector (un vector unitario por documento)",
"_____no_output_____"
]
],
[
[
"def build_bow_tr_frec_norm(tr_txt, V, dict_indices):\n BOW = np.zeros((len(tr_txt),len(V)), dtype=np.float64)\n cont_doc = 0\n for tr in tr_txt:\n fdist_doc = nltk.FreqDist(tr.split())\n len_fd = len(fdist_doc)\n for word in fdist_doc:\n if word in dict_indices:\n BOW[cont_doc, dict_indices[word]] = fdist_doc[word] / len_fd\n cont_doc += 1\n \n for row in BOW:\n row /= np.linalg.norm(row) or 1\n\n return BOW",
"_____no_output_____"
],
[
"BOW_txt_frec_norm = build_bow_tr_frec_norm(tr_txt, V, dict_indices)\nBOW_val_frec_norm = build_bow_tr_frec_norm(val_txt, V, dict_indices)",
"_____no_output_____"
],
[
"parameters = {'C': [.05, .12, .25, .5, 1, 2, 4]}\n\nsvr = svm.LinearSVC(class_weight='balanced')\ngrid = GridSearchCV(estimator=svr, param_grid=parameters, n_jobs=8, scoring=\"f1_macro\", cv=5)\n\ngrid.fit(BOW_txt_frec_norm, tr_y) \n\ny_pred = grid.predict(BOW_val_frec_norm)\n\np, r, f, _ = precision_recall_fscore_support(val_y, y_pred, average='macro', pos_label=None)\n\nprint(confusion_matrix(val_y, y_pred))\nprint(metrics.classification_report(val_y, y_pred))",
"[[318 79]\n [ 54 165]]\n precision recall f1-score support\n\n 0 0.85 0.80 0.83 397\n 1 0.68 0.75 0.71 219\n\n accuracy 0.78 616\n macro avg 0.77 0.78 0.77 616\nweighted avg 0.79 0.78 0.79 616\n\n"
]
],
[
[
"## 4) Bolsa de Palabras con TFIDF y clasifique",
"_____no_output_____"
]
],
[
[
"def build_bow_tr_tfidf(tr_txt, V, dict_indices):\n BOW = np.zeros((len(tr_txt),len(V)), dtype=np.float64)\n cont_doc = 0\n \n N = len(tr_txt)\n \n frecs_history = [nltk.FreqDist(tr.split()) for tr in tr_txt]\n \n for n_doc, tr in enumerate(tr_txt):\n fdist_doc = frecs_history[n_doc]\n for word in fdist_doc:\n if word in dict_indices:\n BOW[cont_doc, dict_indices[word]] = fdist_doc[word] / len(fdist_doc)\n for fh in frecs_history:\n count = 0\n if word in fh:\n count +=1\n BOW[cont_doc, dict_indices[word]] *= math.log(len(tr_txt) / (count + 1))\n cont_doc += 1\n \n return BOW",
"_____no_output_____"
],
[
"BOW_txt_frec_norm = build_bow_tr_tfidf(tr_txt, V, dict_indices)\nBOW_val_frec_norm = build_bow_tr_tfidf(val_txt, V, dict_indices)",
"_____no_output_____"
],
[
"parameters = {'C': [.05, .12, .25, .5, 1, 2, 4]}\n\nsvr = svm.LinearSVC(class_weight='balanced')\ngrid = GridSearchCV(estimator=svr, param_grid=parameters, n_jobs=8, scoring=\"f1_macro\", cv=5)\n\ngrid.fit(BOW_txt_frec_norm, tr_y) \n\ny_pred = grid.predict(BOW_val_frec_norm)\n\np, r, f, _ = precision_recall_fscore_support(val_y, y_pred, average='macro', pos_label=None)\n\nprint(confusion_matrix(val_y, y_pred))\nprint(metrics.classification_report(val_y, y_pred))",
"[[344 53]\n [ 72 147]]\n precision recall f1-score support\n\n 0 0.83 0.87 0.85 397\n 1 0.73 0.67 0.70 219\n\n accuracy 0.80 616\n macro avg 0.78 0.77 0.77 616\nweighted avg 0.79 0.80 0.79 616\n\n"
]
],
[
[
" ## 5) (Opcional) Mismo que anterior pero normalizando TFIDF con la norma del vector",
"_____no_output_____"
]
],
[
[
"def build_bow_tr_tfidf_norm(tr_txt, V, dict_indices):\n BOW = np.zeros((len(tr_txt),len(V)), dtype=np.float64)\n cont_doc = 0\n \n N = len(tr_txt)\n \n frecs_history = [nltk.FreqDist(tr.split()) for tr in tr_txt]\n \n for n_doc, tr in enumerate(tr_txt):\n fdist_doc = frecs_history[n_doc]\n for word in fdist_doc:\n if word in dict_indices:\n BOW[cont_doc, dict_indices[word]] = fdist_doc[word] / len(fdist_doc)\n for fh in frecs_history:\n count = 0\n if word in fh:\n count +=1\n BOW[cont_doc, dict_indices[word]] *= math.log(len(tr_txt) / (count + 1))\n cont_doc += 1\n \n for row in BOW:\n row /= np.linalg.norm(row) or 1\n \n return BOW",
"_____no_output_____"
],
[
"BOW_txt_frec_norm = build_bow_tr_tfidf_norm(tr_txt, V, dict_indices)\nBOW_val_frec_norm = build_bow_tr_tfidf_norm(val_txt, V, dict_indices)",
"_____no_output_____"
],
[
"parameters = {'C': [.05, .12, .25, .5, 1, 2, 4]}\n\nsvr = svm.LinearSVC(class_weight='balanced')\ngrid = GridSearchCV(estimator=svr, param_grid=parameters, n_jobs=8, scoring=\"f1_macro\", cv=5)\n\ngrid.fit(BOW_txt_frec_norm, tr_y) \n\ny_pred = grid.predict(BOW_val_frec_norm)\n\np, r, f, _ = precision_recall_fscore_support(val_y, y_pred, average='macro', pos_label=None)\n\nprint(confusion_matrix(val_y, y_pred))\nprint(metrics.classification_report(val_y, y_pred))",
"[[317 80]\n [ 54 165]]\n precision recall f1-score support\n\n 0 0.85 0.80 0.83 397\n 1 0.67 0.75 0.71 219\n\n accuracy 0.78 616\n macro avg 0.76 0.78 0.77 616\nweighted avg 0.79 0.78 0.78 616\n\n"
]
],
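scikit-learn is already in use for the classifiers, so `TfidfVectorizer` makes a convenient cross-check for the hand-rolled, norm-scaled TF-IDF features. Its IDF formula and smoothing differ from the ones used here, so the numbers will not match exactly; the sketch below assumes the notebook's `dict_indices`, `tr_txt` and `val_txt` variables.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Reuse the notebook's vocabulary so both representations share column indices.
vec = TfidfVectorizer(vocabulary=dict_indices, norm='l2',
                      tokenizer=str.split, lowercase=False)
X_tr = vec.fit_transform(tr_txt)    # sparse matrix with L2-normalized rows
X_val = vec.transform(val_txt)

print(X_tr.shape, X_val.shape)
```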
[
[
"## 6) (Opcional) Bolsa de Palabras con Top 10000 palabras más frecuentes",
"_____no_output_____"
]
],
[
[
"def build_bow_top(tr_txt, V, dict_indices, top=1000, jump=0):\n BOW = np.zeros((len(tr_txt),len(V)), dtype=int)\n cont_doc = 0\n for tr in tr_txt:\n fdist_doc = nltk.FreqDist(tr.split())\n fdist_doc_sorted = sorted(fdist_doc.items(), key=lambda item: item[1], reverse=True)\n for word, _ in fdist_doc_sorted[jump:jump+top]:\n if word in dict_indices:\n BOW[cont_doc, dict_indices[word]] = fdist_doc[word]\n cont_doc += 1\n\n return BOW",
"_____no_output_____"
],
[
"BOW_txt_frec_norm = build_bow_top(tr_txt, V, dict_indices, top=10000)\nBOW_val_frec_norm = build_bow_top(val_txt, V, dict_indices, top=10000)",
"_____no_output_____"
],
[
"parameters = {'C': [.05, .12, .25, .5, 1, 2, 4]}\n\nsvr = svm.LinearSVC(class_weight='balanced')\ngrid = GridSearchCV(estimator=svr, param_grid=parameters, n_jobs=8, scoring=\"f1_macro\", cv=5)\n\ngrid.fit(BOW_txt_frec_norm, tr_y) \n\ny_pred = grid.predict(BOW_val_frec_norm)\n\np, r, f, _ = precision_recall_fscore_support(val_y, y_pred, average='macro', pos_label=None)\n\nprint(confusion_matrix(val_y, y_pred))\nprint(metrics.classification_report(val_y, y_pred))",
"[[335 62]\n [ 52 167]]\n precision recall f1-score support\n\n 0 0.87 0.84 0.85 397\n 1 0.73 0.76 0.75 219\n\n accuracy 0.81 616\n macro avg 0.80 0.80 0.80 616\nweighted avg 0.82 0.81 0.82 616\n\n"
]
],
[
[
"## 7) (Opcional) Bolsa de Palabras descartando las top 1000 palabras más frecuentes y tomando las siguientes 5000",
"_____no_output_____"
]
],
[
[
"BOW_txt_frec_norm = build_bow_top(tr_txt, V, dict_indices, top=5000, jump=1000)\nBOW_val_frec_norm = build_bow_top(val_txt, V, dict_indices, top=5000, jump=1000)",
"_____no_output_____"
],
[
"parameters = {'C': [.05, .12, .25, .5, 1, 2, 4]}\n\nsvr = svm.LinearSVC(class_weight='balanced')\ngrid = GridSearchCV(estimator=svr, param_grid=parameters, n_jobs=8, scoring=\"f1_macro\", cv=5)\n\ngrid.fit(BOW_txt_frec_norm, tr_y) \n\ny_pred = grid.predict(BOW_val_frec_norm)\n\np, r, f, _ = precision_recall_fscore_support(val_y, y_pred, average='macro', pos_label=None)\n\nprint(confusion_matrix(val_y, y_pred))\nprint(metrics.classification_report(val_y, y_pred))",
"[[ 0 397]\n [ 0 219]]\n precision recall f1-score support\n\n 0 0.00 0.00 0.00 397\n 1 0.36 1.00 0.52 219\n\n accuracy 0.36 616\n macro avg 0.18 0.50 0.26 616\nweighted avg 0.13 0.36 0.19 616\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
cb5c015dbb7315b9616c18efce7de8fac9e74dab | 842,837 | ipynb | Jupyter Notebook | 11-Extended-Kalman-Filters.ipynb | jonhilgart22/Kalman-and-Bayesian-Filters-in-Python | 92068f03c7c34f76d0580dce773da7db76054894 | [
"CC-BY-4.0"
] | 2 | 2020-12-10T03:13:06.000Z | 2021-07-20T23:46:05.000Z | 11-Extended-Kalman-Filters.ipynb | jonhilgart22/Kalman-and-Bayesian-Filters-in-Python | 92068f03c7c34f76d0580dce773da7db76054894 | [
"CC-BY-4.0"
] | null | null | null | 11-Extended-Kalman-Filters.ipynb | jonhilgart22/Kalman-and-Bayesian-Filters-in-Python | 92068f03c7c34f76d0580dce773da7db76054894 | [
"CC-BY-4.0"
] | 2 | 2020-05-01T10:24:14.000Z | 2022-01-26T16:25:36.000Z | 504.089115 | 90,572 | 0.93044 | [
[
[
"[Table of Contents](./table_of_contents.ipynb)",
"_____no_output_____"
],
[
"# The Extended Kalman Filter",
"_____no_output_____"
]
],
[
[
"from __future__ import division, print_function\n%matplotlib inline",
"_____no_output_____"
],
[
"#format the book\nimport book_format\nbook_format.set_style()",
"_____no_output_____"
]
],
[
[
"We have developed the theory for the linear Kalman filter. Then, in the last two chapters we broached the topic of using Kalman filters for nonlinear problems. In this chapter we will learn the Extended Kalman filter (EKF). The EKF handles nonlinearity by linearizing the system at the point of the current estimate, and then the linear Kalman filter is used to filter this linearized system. It was one of the very first techniques used for nonlinear problems, and it remains the most common technique. \n\nThe EKF provides significant mathematical challenges to the designer of the filter; this is the most challenging chapter of the book. I do everything I can to avoid the EKF in favor of other techniques that have been developed to filter nonlinear problems. However, the topic is unavoidable; all classic papers and a majority of current papers in the field use the EKF. Even if you do not use the EKF in your own work you will need to be familiar with the topic to be able to read the literature. ",
"_____no_output_____"
],
[
"## Linearizing the Kalman Filter\n\nThe Kalman filter uses linear equations, so it does not work with nonlinear problems. Problems can be nonlinear in two ways. First, the process model might be nonlinear. An object falling through the atmosphere encounters drag which reduces its acceleration. The drag coefficient varies based on the velocity the object. The resulting behavior is nonlinear - it cannot be modeled with linear equations. Second, the measurements could be nonlinear. For example, a radar gives a range and bearing to a target. We use trigonometry, which is nonlinear, to compute the position of the target.\n\nFor the linear filter we have these equations for the process and measurement models:\n\n$$\\begin{aligned}\\dot{\\mathbf x} &= \\mathbf{Ax} + w_x\\\\\n\\mathbf z &= \\mathbf{Hx} + w_z\n\\end{aligned}$$\n\nWhere $\\mathbf A$ is the systems dynamic matrix. Using the state space methods covered in the **Kalman Filter Math** chapter these equations can be tranformed into \n$$\\begin{aligned}\\bar{\\mathbf x} &= \\mathbf{Fx} \\\\\n\\mathbf z &= \\mathbf{Hx}\n\\end{aligned}$$\n\nwhere $\\mathbf F$ is the *fundamental matrix*. The noise $w_x$ and $w_z$ terms are incorporated into the matrices $\\mathbf R$ and $\\mathbf Q$. This form of the equations allow us to compute the state at step $k$ given a measurement at step $k$ and the state estimate at step $k-1$. In earlier chapters I built your intuition and minimized the math by using problems describable with Newton's equations. We know how to design $\\mathbf F$ based on high school physics.\n\n\nFor the nonlinear model the linear expression $\\mathbf{Fx} + \\mathbf{Bu}$ is replaced by a nonlinear function $f(\\mathbf x, \\mathbf u)$, and the linear expression $\\mathbf{Hx}$ is replaced by a nonlinear function $h(\\mathbf x)$:\n\n$$\\begin{aligned}\\dot{\\mathbf x} &= f(\\mathbf x, \\mathbf u) + w_x\\\\\n\\mathbf z &= h(\\mathbf x) + w_z\n\\end{aligned}$$\n\nYou might imagine that we could proceed by finding a new set of Kalman filter equations that optimally solve these equations. But if you remember the charts in the **Nonlinear Filtering** chapter you'll recall that passing a Gaussian through a nonlinear function results in a probability distribution that is no longer Gaussian. So this will not work.\n\nThe EKF does not alter the Kalman filter's linear equations. Instead, it *linearizes* the nonlinear equations at the point of the current estimate, and uses this linearization in the linear Kalman filter. \n\n*Linearize* means what it sounds like. We find a line that most closely matches the curve at a defined point. The graph below linearizes the parabola $f(x)=x^2-2x$ at $x=1.5$.",
"_____no_output_____"
]
],
[
[
"import kf_book.ekf_internal as ekf_internal\nekf_internal.show_linearization()",
"_____no_output_____"
]
],
[
[
"If the curve above is the process model, then the dotted lines shows the linearization of that curve for the estimate $x=1.5$.\n\nWe linearize systems by taking the derivative, which finds the slope of a curve:\n\n$$\\begin{aligned}\nf(x) &= x^2 -2x \\\\\n\\frac{df}{dx} &= 2x - 2\n\\end{aligned}$$\n\nand then evaluating it at $x$:\n\n$$\\begin{aligned}m &= f'(x=1.5) \\\\&= 2(1.5) - 2 \\\\&= 1\\end{aligned}$$ \n\nLinearizing systems of differential equations is similar. We linearize $f(\\mathbf x, \\mathbf u)$, and $h(\\mathbf x)$ by taking the partial derivatives of each to evaluate $\\mathbf F$ and $\\mathbf H$ at the point $\\mathbf x_t$ and $\\mathbf u_t$. We call the partial derivative of a matrix the [*Jacobian*](https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant). This gives us the the discrete state transition matrix and measurement model matrix:\n\n$$\n\\begin{aligned}\n\\mathbf F \n&= {\\frac{\\partial{f(\\mathbf x_t, \\mathbf u_t)}}{\\partial{\\mathbf x}}}\\biggr|_{{\\mathbf x_t},{\\mathbf u_t}} \\\\\n\\mathbf H &= \\frac{\\partial{h(\\bar{\\mathbf x}_t)}}{\\partial{\\bar{\\mathbf x}}}\\biggr|_{\\bar{\\mathbf x}_t} \n\\end{aligned}\n$$\n\nThis leads to the following equations for the EKF. I put boxes around the differences from the linear filter:\n\n$$\\begin{array}{l|l}\n\\text{linear Kalman filter} & \\text{EKF} \\\\\n\\hline \n& \\boxed{\\mathbf F = {\\frac{\\partial{f(\\mathbf x_t, \\mathbf u_t)}}{\\partial{\\mathbf x}}}\\biggr|_{{\\mathbf x_t},{\\mathbf u_t}}} \\\\\n\\mathbf{\\bar x} = \\mathbf{Fx} + \\mathbf{Bu} & \\boxed{\\mathbf{\\bar x} = f(\\mathbf x, \\mathbf u)} \\\\\n\\mathbf{\\bar P} = \\mathbf{FPF}^\\mathsf{T}+\\mathbf Q & \\mathbf{\\bar P} = \\mathbf{FPF}^\\mathsf{T}+\\mathbf Q \\\\\n\\hline\n& \\boxed{\\mathbf H = \\frac{\\partial{h(\\bar{\\mathbf x}_t)}}{\\partial{\\bar{\\mathbf x}}}\\biggr|_{\\bar{\\mathbf x}_t}} \\\\\n\\textbf{y} = \\mathbf z - \\mathbf{H \\bar{x}} & \\textbf{y} = \\mathbf z - \\boxed{h(\\bar{x})}\\\\\n\\mathbf{K} = \\mathbf{\\bar{P}H}^\\mathsf{T} (\\mathbf{H\\bar{P}H}^\\mathsf{T} + \\mathbf R)^{-1} & \\mathbf{K} = \\mathbf{\\bar{P}H}^\\mathsf{T} (\\mathbf{H\\bar{P}H}^\\mathsf{T} + \\mathbf R)^{-1} \\\\\n\\mathbf x=\\mathbf{\\bar{x}} +\\mathbf{K\\textbf{y}} & \\mathbf x=\\mathbf{\\bar{x}} +\\mathbf{K\\textbf{y}} \\\\\n\\mathbf P= (\\mathbf{I}-\\mathbf{KH})\\mathbf{\\bar{P}} & \\mathbf P= (\\mathbf{I}-\\mathbf{KH})\\mathbf{\\bar{P}}\n\\end{array}$$\n\nWe don't normally use $\\mathbf{Fx}$ to propagate the state for the EKF as the linearization causes inaccuracies. It is typical to compute $\\bar{\\mathbf x}$ using a suitable numerical integration technique such as Euler or Runge Kutta. Thus I wrote $\\mathbf{\\bar x} = f(\\mathbf x, \\mathbf u)$. For the same reasons we don't use $\\mathbf{H\\bar{x}}$ in the computation for the residual, opting for the more accurate $h(\\bar{\\mathbf x})$.\n\nI think the easiest way to understand the EKF is to start off with an example. Later you may want to come back and reread this section.",
"_____no_output_____"
],
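As a quick sanity check of the slope computed above, SymPy can differentiate $f(x) = x^2 - 2x$ and evaluate the derivative at $x=1.5$. This is only a verification sketch for the linearization example, not part of the filter.

```python
import sympy

x = sympy.symbols('x')
f = x**2 - 2*x
slope = f.diff(x).subs(x, 1.5)                # expect 1.0
tangent = f.subs(x, 1.5) + slope*(x - 1.5)    # linearization at x = 1.5
print(slope, sympy.expand(tangent))
```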
[
"## Example: Tracking a Airplane\n\nThis example tracks an airplane using ground based radar. We implemented a UKF for this problem in the last chapter. Now we will implement an EKF for the same problem so we can compare both the filter performance and the level of effort required to implement the filter.\n\nRadars work by emitting a beam of radio waves and scanning for a return bounce. Anything in the beam's path will reflects some of the signal back to the radar. By timing how long it takes for the reflected signal to get back to the radar the system can compute the *slant distance* - the straight line distance from the radar installation to the object.\n\nThe relationship between the radar's slant range distance $r$ and elevation angle $\\epsilon$ with the horizontal position $x$ and altitude $y$ of the aircraft is illustrated in the figure below:",
"_____no_output_____"
]
],
[
[
"ekf_internal.show_radar_chart()",
"_____no_output_____"
]
],
[
[
"This gives us the equalities:\n\n$$\\begin{aligned}\n\\epsilon &= \\tan^{-1} \\frac y x\\\\\nr^2 &= x^2 + y^2\n\\end{aligned}$$ ",
"_____no_output_____"
],
[
"### Design the State Variables\n\nWe want to track the position of an aircraft assuming a constant velocity and altitude, and measurements of the slant distance to the aircraft. That means we need 3 state variables - horizontal distance, horizonal velocity, and altitude:\n\n$$\\mathbf x = \\begin{bmatrix}\\mathtt{distance} \\\\\\mathtt{velocity}\\\\ \\mathtt{altitude}\\end{bmatrix}= \\begin{bmatrix}x \\\\ \\dot x\\\\ y\\end{bmatrix}$$",
"_____no_output_____"
],
[
"### Design the Process Model\n\nWe assume a Newtonian, kinematic system for the aircraft. We've used this model in previous chapters, so by inspection you may recognize that we want\n\n$$\\mathbf F = \\left[\\begin{array}{cc|c} 1 & \\Delta t & 0\\\\\n0 & 1 & 0 \\\\ \\hline\n0 & 0 & 1\\end{array}\\right]$$\n\nI've partioned the matrix into blocks to show the upper left block is a constant velocity model for $x$, and the lower right block is a constant position model for $y$.\n\nHowever, let's practice finding these matrices. We model systems with a set of differential equations. We need an equation in the form \n\n$$\\dot{\\mathbf x} = \\mathbf{Ax} + \\mathbf{w}$$\nwhere $\\mathbf{w}$ is the system noise. \n\nThe variables $x$ and $y$ are independent so we can compute them separately. The differential equations for motion in one dimension are:\n\n$$\\begin{aligned}v &= \\dot x \\\\\na &= \\ddot{x} = 0\\end{aligned}$$\n\nNow we put the differential equations into state-space form. If this was a second or greater order differential system we would have to first reduce them to an equivalent set of first degree equations. The equations are first order, so we put them in state space matrix form as\n\n$$\\begin{aligned}\\begin{bmatrix}\\dot x \\\\ \\ddot{x}\\end{bmatrix} &= \\begin{bmatrix}0&1\\\\0&0\\end{bmatrix} \\begin{bmatrix}x \\\\ \n\\dot x\\end{bmatrix} \\\\ \\dot{\\mathbf x} &= \\mathbf{Ax}\\end{aligned}$$\nwhere $\\mathbf A=\\begin{bmatrix}0&1\\\\0&0\\end{bmatrix}$. \n\nRecall that $\\mathbf A$ is the *system dynamics matrix*. It describes a set of linear differential equations. From it we must compute the state transition matrix $\\mathbf F$. $\\mathbf F$ describes a discrete set of linear equations which compute $\\mathbf x$ for a discrete time step $\\Delta t$.\n\nA common way to compute $\\mathbf F$ is to use the power series expansion of the matrix exponential:\n\n$$\\mathbf F(\\Delta t) = e^{\\mathbf A\\Delta t} = \\mathbf{I} + \\mathbf A\\Delta t + \\frac{(\\mathbf A\\Delta t)^2}{2!} + \\frac{(\\mathbf A \\Delta t)^3}{3!} + ... $$\n\n\n$\\mathbf A^2 = \\begin{bmatrix}0&0\\\\0&0\\end{bmatrix}$, so all higher powers of $\\mathbf A$ are also $\\mathbf{0}$. Thus the power series expansion is:\n\n$$\n\\begin{aligned}\n\\mathbf F &=\\mathbf{I} + \\mathbf At + \\mathbf{0} \\\\\n&= \\begin{bmatrix}1&0\\\\0&1\\end{bmatrix} + \\begin{bmatrix}0&1\\\\0&0\\end{bmatrix}\\Delta t\\\\\n\\mathbf F &= \\begin{bmatrix}1&\\Delta t\\\\0&1\\end{bmatrix}\n\\end{aligned}$$\n\nThis is the same result used by the kinematic equations! This exercise was unnecessary other than to illustrate finding the state transition matrix from linear differential equations. We will conclude the chapter with an example that will require the use of this technique.",
"_____no_output_____"
],
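The power-series result for $\mathbf F$ can be double checked numerically with SciPy's matrix exponential. This is a small verification sketch written for this discussion, using an assumed time step of 0.05 seconds.

```python
import numpy as np
from scipy.linalg import expm

dt = 0.05
A = np.array([[0., 1.],
              [0., 0.]])    # system dynamics matrix for constant velocity
F = expm(A * dt)            # matrix exponential e^(A*dt)
print(F)                    # [[1.  0.05], [0.  1.  ]]
```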
[
"### Design the Measurement Model\n\nThe measurement function takes the state estimate of the prior $\\bar{\\mathbf x}$ and turn it into a measurement of the slant range distance. We use the Pythagorean theorem to derive:\n\n$$h(\\bar{\\mathbf x}) = \\sqrt{x^2 + y^2}$$\n\nThe relationship between the slant distance and the position on the ground is nonlinear due to the square root. We linearize it by evaluating its partial derivative at $\\mathbf x_t$:\n\n$$\n\\mathbf H = \\frac{\\partial{h(\\bar{\\mathbf x})}}{\\partial{\\bar{\\mathbf x}}}\\biggr|_{\\bar{\\mathbf x}_t}\n$$\n\nThe partial derivative of a matrix is called a Jacobian, and takes the form \n\n$$\\frac{\\partial \\mathbf H}{\\partial \\bar{\\mathbf x}} = \n\\begin{bmatrix}\n\\frac{\\partial h_1}{\\partial x_1} & \\frac{\\partial h_1}{\\partial x_2} &\\dots \\\\\n\\frac{\\partial h_2}{\\partial x_1} & \\frac{\\partial h_2}{\\partial x_2} &\\dots \\\\\n\\vdots & \\vdots\n\\end{bmatrix}\n$$\n\nIn other words, each element in the matrix is the partial derivative of the function $h$ with respect to the $x$ variables. For our problem we have\n\n$$\\mathbf H = \\begin{bmatrix}{\\partial h}/{\\partial x} & {\\partial h}/{\\partial \\dot{x}} & {\\partial h}/{\\partial y}\\end{bmatrix}$$\n\nSolving each in turn:\n\n$$\\begin{aligned}\n\\frac{\\partial h}{\\partial x} &= \\frac{\\partial}{\\partial x} \\sqrt{x^2 + y^2} \\\\\n&= \\frac{x}{\\sqrt{x^2 + y^2}}\n\\end{aligned}$$\n\nand\n\n$$\\begin{aligned}\n\\frac{\\partial h}{\\partial \\dot{x}} &=\n\\frac{\\partial}{\\partial \\dot{x}} \\sqrt{x^2 + y^2} \\\\ \n&= 0\n\\end{aligned}$$\n\nand\n\n$$\\begin{aligned}\n\\frac{\\partial h}{\\partial y} &= \\frac{\\partial}{\\partial y} \\sqrt{x^2 + y^2} \\\\ \n&= \\frac{y}{\\sqrt{x^2 + y^2}}\n\\end{aligned}$$\n\ngiving us \n\n$$\\mathbf H = \n\\begin{bmatrix}\n\\frac{x}{\\sqrt{x^2 + y^2}} & \n0 &\n&\n\\frac{y}{\\sqrt{x^2 + y^2}}\n\\end{bmatrix}$$\n\nThis may seem daunting, so step back and recognize that all of this math is doing something very simple. We have an equation for the slant range to the airplane which is nonlinear. The Kalman filter only works with linear equations, so we need to find a linear equation that approximates $\\mathbf H$. As we discussed above, finding the slope of a nonlinear equation at a given point is a good approximation. For the Kalman filter, the 'given point' is the state variable $\\mathbf x$ so we need to take the derivative of the slant range with respect to $\\mathbf x$. For the linear Kalman filter $\\mathbf H$ was a constant that we computed prior to running the filter. For the EKF $\\mathbf H$ is updated at each step as the evaluation point $\\bar{\\mathbf x}$ changes at each epoch.\n\nTo make this more concrete, let's now write a Python function that computes the Jacobian of $h$ for this problem.",
"_____no_output_____"
]
],
[
[
"from math import sqrt\ndef HJacobian_at(x):\n \"\"\" compute Jacobian of H matrix at x \"\"\"\n\n horiz_dist = x[0]\n altitude = x[2]\n denom = sqrt(horiz_dist**2 + altitude**2)\n return array ([[horiz_dist/denom, 0., altitude/denom]])",
"_____no_output_____"
]
],
[
[
"Finally, let's provide the code for $h(\\bar{\\mathbf x})$:",
"_____no_output_____"
]
],
[
[
"def hx(x):\n \"\"\" compute measurement for slant range that\n would correspond to state x.\n \"\"\"\n \n return (x[0]**2 + x[2]**2) ** 0.5",
"_____no_output_____"
]
],
[
[
"Now let's write a simulation for our radar.",
"_____no_output_____"
]
],
[
[
"from numpy.random import randn\nimport math\n\nclass RadarSim:\n \"\"\" Simulates the radar signal returns from an object\n flying at a constant altityude and velocity in 1D. \n \"\"\"\n \n def __init__(self, dt, pos, vel, alt):\n self.pos = pos\n self.vel = vel\n self.alt = alt\n self.dt = dt\n \n def get_range(self):\n \"\"\" Returns slant range to the object. Call once \n for each new measurement at dt time from last call.\n \"\"\"\n \n # add some process noise to the system\n self.vel = self.vel + .1*randn()\n self.alt = self.alt + .1*randn()\n self.pos = self.pos + self.vel*self.dt\n \n # add measurement noise\n err = self.pos * 0.05*randn()\n slant_dist = math.sqrt(self.pos**2 + self.alt**2)\n \n return slant_dist + err",
"_____no_output_____"
]
],
[
[
"### Design Process and Measurement Noise\n\nThe radar measures the range to a target. We will use $\\sigma_{range}= 5$ meters for the noise. This gives us\n\n$$\\mathbf R = \\begin{bmatrix}\\sigma_{range}^2\\end{bmatrix} = \\begin{bmatrix}25\\end{bmatrix}$$\n\n\nThe design of $\\mathbf Q$ requires some discussion. The state $\\mathbf x= \\begin{bmatrix}x & \\dot x & y\\end{bmatrix}^\\mathtt{T}$. The first two elements are position (down range distance) and velocity, so we can use `Q_discrete_white_noise` noise to compute the values for the upper left hand side of $\\mathbf Q$. The third element of $\\mathbf x$ is altitude, which we are assuming is independent of the down range distance. That leads us to a block design of $\\mathbf Q$ of:\n\n$$\\mathbf Q = \\begin{bmatrix}\\mathbf Q_\\mathtt{x} & 0 \\\\ 0 & \\mathbf Q_\\mathtt{y}\\end{bmatrix}$$",
"_____no_output_____"
],
[
"### Implementation\n\n`FilterPy` provides the class `ExtendedKalmanFilter`. It works similarly to the `KalmanFilter` class we have been using, except that it allows you to provide a function that computes the Jacobian of $\\mathbf H$ and the function $h(\\mathbf x)$. \n\nWe start by importing the filter and creating it. The dimension of `x` is 3 and `z` has dimension 1.\n\n```python\nfrom filterpy.kalman import ExtendedKalmanFilter\n\nrk = ExtendedKalmanFilter(dim_x=3, dim_z=1)\n```\nWe create the radar simulator:\n```python\nradar = RadarSim(dt, pos=0., vel=100., alt=1000.)\n```\nWe will initialize the filter near the airplane's actual position:\n\n```python\nrk.x = array([radar.pos, radar.vel-10, radar.alt+100])\n```\n\nWe assign the system matrix using the first term of the Taylor series expansion we computed above:\n\n```python\ndt = 0.05\nrk.F = eye(3) + array([[0, 1, 0],\n [0, 0, 0],\n [0, 0, 0]])*dt\n```\n\nAfter assigning reasonable values to $\\mathbf R$, $\\mathbf Q$, and $\\mathbf P$ we can run the filter with a simple loop. We pass the functions for computing the Jacobian of $\\mathbf H$ and $h(x)$ into the `update` method.\n\n```python\nfor i in range(int(20/dt)):\n z = radar.get_range()\n rk.update(array([z]), HJacobian_at, hx)\n rk.predict()\n```\n\nAdding some boilerplate code to save and plot the results we get:",
"_____no_output_____"
]
],
[
[
"from filterpy.common import Q_discrete_white_noise\nfrom filterpy.kalman import ExtendedKalmanFilter\nfrom numpy import eye, array, asarray\nimport numpy as np\n\ndt = 0.05\nrk = ExtendedKalmanFilter(dim_x=3, dim_z=1)\nradar = RadarSim(dt, pos=0., vel=100., alt=1000.)\n\n# make an imperfect starting guess\nrk.x = array([radar.pos-100, radar.vel+100, radar.alt+1000])\n\nrk.F = eye(3) + array([[0, 1, 0],\n [0, 0, 0],\n [0, 0, 0]]) * dt\n\nrange_std = 5. # meters\nrk.R = np.diag([range_std**2])\nrk.Q[0:2, 0:2] = Q_discrete_white_noise(2, dt=dt, var=0.1)\nrk.Q[2,2] = 0.1\nrk.P *= 50\n\nxs, track = [], []\nfor i in range(int(20/dt)):\n z = radar.get_range()\n track.append((radar.pos, radar.vel, radar.alt))\n \n rk.update(array([z]), HJacobian_at, hx)\n xs.append(rk.x)\n rk.predict()\n\nxs = asarray(xs)\ntrack = asarray(track)\ntime = np.arange(0, len(xs)*dt, dt)\nekf_internal.plot_radar(xs, track, time)",
"_____no_output_____"
]
],
[
[
"## Using SymPy to compute Jacobians\n\nDepending on your experience with derivatives you may have found the computation of the Jacobian difficult. Even if you found it easy, a slightly more difficult problem easily leads to very difficult computations.\n\nAs explained in Appendix A, we can use the SymPy package to compute the Jacobian for us.",
"_____no_output_____"
]
],
[
[
"import sympy\nfrom IPython.display import display\nsympy.init_printing(use_latex='mathjax')\n\nx, x_vel, y = sympy.symbols('x, x_vel y')\n\nH = sympy.Matrix([sympy.sqrt(x**2 + y**2)])\n\nstate = sympy.Matrix([x, x_vel, y])\nJ = H.jacobian(state)\n\ndisplay(state)\ndisplay(J)",
"_____no_output_____"
]
],
[
[
"This result is the same as the result we computed above, and with much less effort on our part!",
"_____no_output_____"
],
[
"## Robot Localization\n\nIt's time to try a real problem. I warn you that this section is difficult. However, most books choose simple, textbook problems with simple answers, and you are left wondering how to solve a real world problem. \n\nWe will consider the problem of robot localization. We already implemented this in the **Unscented Kalman Filter** chapter, and I recommend you read it now if you haven't already. In this scenario we have a robot that is moving through a landscape using a sensor to detect landmarks. This could be a self driving car using computer vision to identify trees, buildings, and other landmarks. It might be one of those small robots that vacuum your house, or a robot in a warehouse.\n\nThe robot has 4 wheels in the same configuration used by automobiles. It maneuvers by pivoting the front wheels. This causes the robot to pivot around the rear axle while moving forward. This is nonlinear behavior which we will have to model. \n\nThe robot has a sensor that measures the range and bearing to known targets in the landscape. This is nonlinear because computing a position from a range and bearing requires square roots and trigonometry. \n\nBoth the process model and measurement models are nonlinear. The EKF accommodates both, so we provisionally conclude that the EKF is a viable choice for this problem.",
"_____no_output_____"
],
[
"### Robot Motion Model\n\nAt a first approximation an automobile steers by pivoting the front tires while moving forward. The front of the car moves in the direction that the wheels are pointing while pivoting around the rear tires. This simple description is complicated by issues such as slippage due to friction, the differing behavior of the rubber tires at different speeds, and the need for the outside tire to travel a different radius than the inner tire. Accurately modeling steering requires a complicated set of differential equations. \n\nFor lower speed robotic applications a simpler *bicycle model* has been found to perform well. This is a depiction of the model:",
"_____no_output_____"
]
],
[
[
"ekf_internal.plot_bicycle()",
"_____no_output_____"
]
],
[
[
"In the **Unscented Kalman Filter** chapter we derived these equations:\n\n$$\\begin{aligned} \n\\beta &= \\frac d w \\tan(\\alpha) \\\\\nx &= x - R\\sin(\\theta) + R\\sin(\\theta + \\beta) \\\\\ny &= y + R\\cos(\\theta) - R\\cos(\\theta + \\beta) \\\\\n\\theta &= \\theta + \\beta\n\\end{aligned}\n$$\n\nwhere $\\theta$ is the robot's heading.\n\nYou do not need to understand this model in detail if you are not interested in steering models. The important thing to recognize is that our motion model is nonlinear, and we will need to deal with that with our Kalman filter.",
"_____no_output_____"
],
[
"### Design the State Variables\n\nFor our filter we will maintain the position $x,y$ and orientation $\\theta$ of the robot:\n\n$$\\mathbf x = \\begin{bmatrix}x \\\\ y \\\\ \\theta\\end{bmatrix}$$\n\nOur control input $\\mathbf u$ is the velocity $v$ and steering angle $\\alpha$:\n\n$$\\mathbf u = \\begin{bmatrix}v \\\\ \\alpha\\end{bmatrix}$$",
"_____no_output_____"
],
[
"### Design the System Model\n\nWe model our system as a nonlinear motion model plus noise.\n\n$$\\bar x = f(x, u) + \\mathcal{N}(0, Q)$$\n\n\n\nUsing the motion model for a robot that we created above, we can expand this to\n\n$$\\bar{\\begin{bmatrix}x\\\\y\\\\\\theta\\end{bmatrix}} = \\begin{bmatrix}x\\\\y\\\\\\theta\\end{bmatrix} + \n\\begin{bmatrix}- R\\sin(\\theta) + R\\sin(\\theta + \\beta) \\\\\nR\\cos(\\theta) - R\\cos(\\theta + \\beta) \\\\\n\\beta\\end{bmatrix}$$\n\nWe find The $\\mathbf F$ by taking the Jacobian of $f(x,u)$.\n\n$$\\mathbf F = \\frac{\\partial f(x, u)}{\\partial x} =\\begin{bmatrix}\n\\frac{\\partial f_1}{\\partial x} & \n\\frac{\\partial f_1}{\\partial y} &\n\\frac{\\partial f_1}{\\partial \\theta}\\\\\n\\frac{\\partial f_2}{\\partial x} & \n\\frac{\\partial f_2}{\\partial y} &\n\\frac{\\partial f_2}{\\partial \\theta} \\\\\n\\frac{\\partial f_3}{\\partial x} & \n\\frac{\\partial f_3}{\\partial y} &\n\\frac{\\partial f_3}{\\partial \\theta}\n\\end{bmatrix}\n$$\n\nWhen we calculate these we get\n\n$$\\mathbf F = \\begin{bmatrix}\n1 & 0 & -R\\cos(\\theta) + R\\cos(\\theta+\\beta) \\\\\n0 & 1 & -R\\sin(\\theta) + R\\sin(\\theta+\\beta) \\\\\n0 & 0 & 1\n\\end{bmatrix}$$\n\nWe can double check our work with SymPy.",
"_____no_output_____"
]
],
[
[
"import sympy\nfrom sympy.abc import alpha, x, y, v, w, R, theta\nfrom sympy import symbols, Matrix\nsympy.init_printing(use_latex=\"mathjax\", fontsize='16pt')\ntime = symbols('t')\nd = v*time\nbeta = (d/w)*sympy.tan(alpha)\nr = w/sympy.tan(alpha)\n\nfxu = Matrix([[x-r*sympy.sin(theta) + r*sympy.sin(theta+beta)],\n [y+r*sympy.cos(theta)- r*sympy.cos(theta+beta)],\n [theta+beta]])\nF = fxu.jacobian(Matrix([x, y, theta]))\nF",
"_____no_output_____"
]
],
[
[
"That looks a bit complicated. We can use SymPy to substitute terms:",
"_____no_output_____"
]
],
[
[
"# reduce common expressions\nB, R = symbols('beta, R')\nF = F.subs((d/w)*sympy.tan(alpha), B)\nF.subs(w/sympy.tan(alpha), R)",
"_____no_output_____"
]
],
[
[
"This form verifies that the computation of the Jacobian is correct.\n\nNow we can turn our attention to the noise. Here, the noise is in our control input, so it is in *control space*. In other words, we command a specific velocity and steering angle, but we need to convert that into errors in $x, y, \\theta$. In a real system this might vary depending on velocity, so it will need to be recomputed for every prediction. I will choose this as the noise model; for a real robot you will need to choose a model that accurately depicts the error in your system. \n\n$$\\mathbf{M} = \\begin{bmatrix}\\sigma_{vel}^2 & 0 \\\\ 0 & \\sigma_\\alpha^2\\end{bmatrix}$$\n\nIf this was a linear problem we would convert from control space to state space using the by now familiar $\\mathbf{FMF}^\\mathsf T$ form. Since our motion model is nonlinear we do not try to find a closed form solution to this, but instead linearize it with a Jacobian which we will name $\\mathbf{V}$. \n\n$$\\mathbf{V} = \\frac{\\partial f(x, u)}{\\partial u} \\begin{bmatrix}\n\\frac{\\partial f_1}{\\partial v} & \\frac{\\partial f_1}{\\partial \\alpha} \\\\\n\\frac{\\partial f_2}{\\partial v} & \\frac{\\partial f_2}{\\partial \\alpha} \\\\\n\\frac{\\partial f_3}{\\partial v} & \\frac{\\partial f_3}{\\partial \\alpha}\n\\end{bmatrix}$$\n\nThese partial derivatives become very difficult to work with. Let's compute them with SymPy. ",
"_____no_output_____"
]
],
[
[
"V = fxu.jacobian(Matrix([v, alpha]))\nV = V.subs(sympy.tan(alpha)/w, 1/R) \nV = V.subs(time*v/R, B)\nV = V.subs(time*v, 'd')\nV",
"_____no_output_____"
]
],
[
[
"This should give you an appreciation of how quickly the EKF become mathematically intractable. \n\nThis gives us the final form of our prediction equations:\n\n$$\\begin{aligned}\n\\mathbf{\\bar x} &= \\mathbf x + \n\\begin{bmatrix}- R\\sin(\\theta) + R\\sin(\\theta + \\beta) \\\\\nR\\cos(\\theta) - R\\cos(\\theta + \\beta) \\\\\n\\beta\\end{bmatrix}\\\\\n\\mathbf{\\bar P} &=\\mathbf{FPF}^{\\mathsf T} + \\mathbf{VMV}^{\\mathsf T}\n\\end{aligned}$$\n\nThis form of linearization is not the only way to predict $\\mathbf x$. For example, we could use a numerical integration technique such as *Runge Kutta* to compute the movement\nof the robot. This will be required if the time step is relatively large. Things are not as cut and dried with the EKF as for the Kalman filter. For a real problem you have to carefully model your system with differential equations and then determine the most appropriate way to solve that system. The correct approach depends on the accuracy you require, how nonlinear the equations are, your processor budget, and numerical stability concerns.",
"_____no_output_____"
],
[
"### Design the Measurement Model\n\nThe robot's sensor provides a noisy bearing and range measurement to multiple known locations in the landscape. The measurement model must convert the state $\\begin{bmatrix}x & y&\\theta\\end{bmatrix}^\\mathsf T$ into a range and bearing to the landmark. If $\\mathbf p$ \nis the position of a landmark, the range $r$ is\n\n$$r = \\sqrt{(p_x - x)^2 + (p_y - y)^2}$$\n\nThe sensor provides bearing relative to the orientation of the robot, so we must subtract the robot's orientation from the bearing to get the sensor reading, like so:\n\n$$\\phi = \\arctan(\\frac{p_y - y}{p_x - x}) - \\theta$$\n\n\nThus our measurement model $h$ is\n\n\n$$\\begin{aligned}\n\\mathbf z& = h(\\bar{\\mathbf x}, \\mathbf p) &+ \\mathcal{N}(0, R)\\\\\n&= \\begin{bmatrix}\n\\sqrt{(p_x - x)^2 + (p_y - y)^2} \\\\\n\\arctan(\\frac{p_y - y}{p_x - x}) - \\theta \n\\end{bmatrix} &+ \\mathcal{N}(0, R)\n\\end{aligned}$$\n\nThis is clearly nonlinear, so we need linearize $h$ at $\\mathbf x$ by taking its Jacobian. We compute that with SymPy below.",
"_____no_output_____"
]
],
[
[
"px, py = symbols('p_x, p_y')\nz = Matrix([[sympy.sqrt((px-x)**2 + (py-y)**2)],\n [sympy.atan2(py-y, px-x) - theta]])\nz.jacobian(Matrix([x, y, theta]))",
"_____no_output_____"
]
],
[
[
"Now we need to write that as a Python function. For example we might write:",
"_____no_output_____"
]
],
[
[
"from math import sqrt\n\ndef H_of(x, landmark_pos):\n \"\"\" compute Jacobian of H matrix where h(x) computes \n the range and bearing to a landmark for state x \"\"\"\n\n px = landmark_pos[0]\n py = landmark_pos[1]\n hyp = (px - x[0, 0])**2 + (py - x[1, 0])**2\n dist = sqrt(hyp)\n\n H = array(\n [[-(px - x[0, 0]) / dist, -(py - x[1, 0]) / dist, 0],\n [ (py - x[1, 0]) / hyp, -(px - x[0, 0]) / hyp, -1]])\n return H",
"_____no_output_____"
]
],
[
[
"We also need to define a function that converts the system state into a measurement.",
"_____no_output_____"
]
],
[
[
"from math import atan2\n\ndef Hx(x, landmark_pos):\n \"\"\" takes a state variable and returns the measurement\n that would correspond to that state.\n \"\"\"\n px = landmark_pos[0]\n py = landmark_pos[1]\n dist = sqrt((px - x[0, 0])**2 + (py - x[1, 0])**2)\n\n Hx = array([[dist],\n [atan2(py - x[1, 0], px - x[0, 0]) - x[2, 0]]])\n return Hx",
"_____no_output_____"
]
],
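When Jacobians are derived by hand it is worth comparing them against a finite-difference approximation of the measurement function. The sketch below does that for `H_of` and `Hx` at an arbitrary state and landmark; the `numerical_jacobian` helper and the test values are illustrative additions, not part of FilterPy.

```python
import numpy as np

def numerical_jacobian(fn, x, eps=1e-6):
    """Finite-difference Jacobian of fn evaluated at column vector x."""
    z0 = fn(x)
    J = np.zeros((z0.shape[0], x.shape[0]))
    for i in range(x.shape[0]):
        dx = np.zeros_like(x)
        dx[i, 0] = eps
        J[:, i] = ((fn(x + dx) - z0) / eps).ravel()
    return J

landmark = (5., 10.)
x = np.array([[2.], [6.], [.3]])
print(H_of(x, landmark))                                   # analytic Jacobian
print(numerical_jacobian(lambda s: Hx(s, landmark), x))    # numeric approximation
```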
[
[
"### Design Measurement Noise\n\nIt is reasonable to assume that the noise of the range and bearing measurements are independent, hence\n\n$$\\mathbf R=\\begin{bmatrix}\\sigma_{range}^2 & 0 \\\\ 0 & \\sigma_{bearing}^2\\end{bmatrix}$$",
"_____no_output_____"
],
[
"### Implementation\n\nWe will use `FilterPy`'s `ExtendedKalmanFilter` class to implement the filter. Its `predict()` method uses the standard linear equations for the process model. Ours is nonlinear, so we will have to override `predict()` with our own implementation. I'll want to also use this class to simulate the robot, so I'll add a method `move()` that computes the position of the robot which both `predict()` and my simulation can call.\n\nThe matrices for the prediction step are quite large. While writing this code I made several errors before I finally got it working. I only found my errors by using SymPy's `evalf` function. `evalf` evaluates a SymPy `Matrix` with specific values for the variables. I decided to demonstrate this technique to you, and used `evalf` in the Kalman filter code. You'll need to understand a couple of points.\n\nFirst, `evalf` uses a dictionary to specify the values. For example, if your matrix contains an `x` and `y`, you can write\n\n```python\n M.evalf(subs={x:3, y:17})\n```\n \nto evaluate the matrix for `x=3` and `y=17`. \n\nSecond, `evalf` returns a `sympy.Matrix` object. Use `numpy.array(M).astype(float)` to convert it to a NumPy array. `numpy.array(M)` creates an array of type `object`, which is not what you want.\n\nHere is the code for the EKF:",
"_____no_output_____"
]
],
[
[
"from filterpy.kalman import ExtendedKalmanFilter as EKF\nfrom numpy import array, sqrt\nclass RobotEKF(EKF):\n def __init__(self, dt, wheelbase, std_vel, std_steer):\n EKF.__init__(self, 3, 2, 2)\n self.dt = dt\n self.wheelbase = wheelbase\n self.std_vel = std_vel\n self.std_steer = std_steer\n\n a, x, y, v, w, theta, time = symbols(\n 'a, x, y, v, w, theta, t')\n d = v*time\n beta = (d/w)*sympy.tan(a)\n r = w/sympy.tan(a)\n \n self.fxu = Matrix(\n [[x-r*sympy.sin(theta)+r*sympy.sin(theta+beta)],\n [y+r*sympy.cos(theta)-r*sympy.cos(theta+beta)],\n [theta+beta]])\n\n self.F_j = self.fxu.jacobian(Matrix([x, y, theta]))\n self.V_j = self.fxu.jacobian(Matrix([v, a]))\n\n # save dictionary and it's variables for later use\n self.subs = {x: 0, y: 0, v:0, a:0, \n time:dt, w:wheelbase, theta:0}\n self.x_x, self.x_y, = x, y \n self.v, self.a, self.theta = v, a, theta\n\n def predict(self, u):\n self.x = self.move(self.x, u, self.dt)\n\n self.subs[self.theta] = self.x[2, 0]\n self.subs[self.v] = u[0]\n self.subs[self.a] = u[1]\n\n F = array(self.F_j.evalf(subs=self.subs)).astype(float)\n V = array(self.V_j.evalf(subs=self.subs)).astype(float)\n\n # covariance of motion noise in control space\n M = array([[self.std_vel*u[0]**2, 0], \n [0, self.std_steer**2]])\n\n self.P = np.dot(F, self.P).dot(F.T) + np.dot(V, M).dot(V.T)\n\n def move(self, x, u, dt):\n hdg = x[2, 0]\n vel = u[0]\n steering_angle = u[1]\n dist = vel * dt\n\n if abs(steering_angle) > 0.001: # is robot turning?\n beta = (dist / self.wheelbase) * tan(steering_angle)\n r = self.wheelbase / tan(steering_angle) # radius\n\n dx = np.array([[-r*sin(hdg) + r*sin(hdg + beta)], \n [r*cos(hdg) - r*cos(hdg + beta)], \n [beta]])\n else: # moving in straight line\n dx = np.array([[dist*cos(hdg)], \n [dist*sin(hdg)], \n [0]])\n return x + dx",
"_____no_output_____"
]
],
[
[
"Now we have another issue to handle. The residual is notionally computed as $y = z - h(x)$ but this will not work because our measurement contains an angle in it. Suppose z has a bearing of $1^\\circ$ and $h(x)$ has a bearing of $359^\\circ$. Naively subtracting them would yield a angular difference of $-358^\\circ$, whereas the correct value is $2^\\circ$. We have to write code to correctly compute the bearing residual.",
"_____no_output_____"
]
],
[
[
"def residual(a, b):\n \"\"\" compute residual (a-b) between measurements containing \n [range, bearing]. Bearing is normalized to [-pi, pi)\"\"\"\n y = a - b\n y[1] = y[1] % (2 * np.pi) # force in range [0, 2 pi)\n if y[1] > np.pi: # move to [-pi, pi)\n y[1] -= 2 * np.pi\n return y",
"_____no_output_____"
]
],
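A quick numeric check of the wrap-around case described above: a measured bearing of $1^\circ$ against a predicted bearing of $359^\circ$ should produce a residual of about $+2^\circ$, not $-358^\circ$. The range values here are arbitrary.

```python
import numpy as np

z = np.array([[25.], [np.radians(1.)]])     # measured range, bearing
hx = np.array([[25.], [np.radians(359.)]])  # predicted range, bearing
y = residual(z, hx)
print(np.degrees(y[1]))                     # ~2.0 degrees
```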
[
[
"The rest of the code runs the simulation and plots the results, and shouldn't need too much comment by now. I create a variable `landmarks` that contains the landmark coordinates. I update the simulated robot position 10 times a second, but run the EKF only once per second. This is for two reasons. First, we are not using Runge Kutta to integrate the differental equations of motion, so a narrow time step allows our simulation to be more accurate. Second, it is fairly normal in embedded systems to have limited processing speed. This forces you to run your Kalman filter only as frequently as absolutely needed.",
"_____no_output_____"
]
],
[
[
"from filterpy.stats import plot_covariance_ellipse\nfrom math import sqrt, tan, cos, sin, atan2\nimport matplotlib.pyplot as plt\n\ndt = 1.0\n\ndef z_landmark(lmark, sim_pos, std_rng, std_brg):\n x, y = sim_pos[0, 0], sim_pos[1, 0]\n d = np.sqrt((lmark[0] - x)**2 + (lmark[1] - y)**2) \n a = atan2(lmark[1] - y, lmark[0] - x) - sim_pos[2, 0]\n z = np.array([[d + randn()*std_rng],\n [a + randn()*std_brg]])\n return z\n\ndef ekf_update(ekf, z, landmark):\n ekf.update(z, HJacobian=H_of, Hx=Hx, \n residual=residual,\n args=(landmark), hx_args=(landmark))\n \n \ndef run_localization(landmarks, std_vel, std_steer, \n std_range, std_bearing,\n step=10, ellipse_step=20, ylim=None):\n ekf = RobotEKF(dt, wheelbase=0.5, std_vel=std_vel, \n std_steer=std_steer)\n ekf.x = array([[2, 6, .3]]).T # x, y, steer angle\n ekf.P = np.diag([.1, .1, .1])\n ekf.R = np.diag([std_range**2, std_bearing**2])\n\n sim_pos = ekf.x.copy() # simulated position\n # steering command (vel, steering angle radians)\n u = array([1.1, .01]) \n\n plt.figure()\n plt.scatter(landmarks[:, 0], landmarks[:, 1],\n marker='s', s=60)\n \n track = []\n for i in range(200):\n sim_pos = ekf.move(sim_pos, u, dt/10.) # simulate robot\n track.append(sim_pos)\n\n if i % step == 0:\n ekf.predict(u=u)\n\n if i % ellipse_step == 0:\n plot_covariance_ellipse(\n (ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2], \n std=6, facecolor='k', alpha=0.3)\n\n x, y = sim_pos[0, 0], sim_pos[1, 0]\n for lmark in landmarks:\n z = z_landmark(lmark, sim_pos,\n std_range, std_bearing)\n ekf_update(ekf, z, lmark)\n\n if i % ellipse_step == 0:\n plot_covariance_ellipse(\n (ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2],\n std=6, facecolor='g', alpha=0.8)\n track = np.array(track)\n plt.plot(track[:, 0], track[:,1], color='k', lw=2)\n plt.axis('equal')\n plt.title(\"EKF Robot localization\")\n if ylim is not None: plt.ylim(*ylim)\n plt.show()\n return ekf",
"_____no_output_____"
],
[
"landmarks = array([[5, 10], [10, 5], [15, 15]])\n\nekf = run_localization(\n landmarks, std_vel=0.1, std_steer=np.radians(1),\n std_range=0.3, std_bearing=0.1)\nprint('Final P:', ekf.P.diagonal())",
"_____no_output_____"
]
],
[
[
"I have plotted the landmarks as solid squares. The path of the robot is drawn with a black line. The covariance ellipses for the predict step are light gray, and the covariances of the update are shown in green. To make them visible at this scale I have set the ellipse boundary at 6$\\sigma$.\n\nWe can see that there is a lot of uncertainty added by our motion model, and that most of the error in in the direction of motion. We determine that from the shape of the blue ellipses. After a few steps we can see that the filter incorporates the landmark measurements and the errors improve.\n\nI used the same initial conditions and landmark locations in the UKF chapter. The UKF achieves much better accuracy in terms of the error ellipse. Both perform roughly as well as far as their estimate for $\\mathbf x$ is concerned. \n\nNow let's add another landmark.",
"_____no_output_____"
]
],
[
[
"landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5]])\n\nekf = run_localization(\n landmarks, std_vel=0.1, std_steer=np.radians(1),\n std_range=0.3, std_bearing=0.1)\nplt.show()\nprint('Final P:', ekf.P.diagonal())",
"_____no_output_____"
]
],
[
[
"The uncertainly in the estimates near the end of the track are smaller. We can see the effect that multiple landmarks have on our uncertainty by only using the first two landmarks.",
"_____no_output_____"
]
],
[
[
"ekf = run_localization(\n landmarks[0:2], std_vel=1.e-10, std_steer=1.e-10,\n std_range=1.4, std_bearing=.05)\nprint('Final P:', ekf.P.diagonal())",
"_____no_output_____"
]
],
[
[
"The estimate quickly diverges from the robot's path after passing the landmarks. The covariance also grows quickly. Let's see what happens with only one landmark:",
"_____no_output_____"
]
],
[
[
"ekf = run_localization(\n landmarks[0:1], std_vel=1.e-10, std_steer=1.e-10,\n std_range=1.4, std_bearing=.05)\nprint('Final P:', ekf.P.diagonal())",
"_____no_output_____"
]
],
[
[
"As you probably suspected, one landmark produces a very bad result. Conversely, a large number of landmarks allows us to make very accurate estimates.",
"_____no_output_____"
]
],
[
[
"landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5], [15, 10], \n [10,14], [23, 14], [25, 20], [10, 20]])\n\nekf = run_localization(\n landmarks, std_vel=0.1, std_steer=np.radians(1),\n std_range=0.3, std_bearing=0.1, ylim=(0, 21))\nprint('Final P:', ekf.P.diagonal())",
"_____no_output_____"
]
],
[
[
"### Discussion\n\nI said that this was a real problem, and in some ways it is. I've seen alternative presentations that used robot motion models that led to simpler Jacobians. On the other hand, my model of the movement is also simplistic in several ways. First, it uses a bicycle model. A real car has two sets of tires, and each travels on a different radius. The wheels do not grip the surface perfectly. I also assumed that the robot responds instantaneously to the control input. Sebastian Thrun writes in *Probabilistic Robots* that this simplified model is justified because the filters perform well when used to track real vehicles. The lesson here is that while you have to have a reasonably accurate nonlinear model, it does not need to be perfect to operate well. As a designer you will need to balance the fidelity of your model with the difficulty of the math and the CPU time required to perform the linear algebra. \n\nAnother way in which this problem was simplistic is that we assumed that we knew the correspondance between the landmarks and measurements. But suppose we are using radar - how would we know that a specific signal return corresponded to a specific building in the local scene? This question hints at SLAM algorithms - simultaneous localization and mapping. SLAM is not the point of this book, so I will not elaborate on this topic. ",
"_____no_output_____"
],
[
"## UKF vs EKF\n\n\nIn the last chapter I used the UKF to solve this problem. The difference in implementation should be very clear. Computing the Jacobians for the state and measurement models was not trivial despite a rudimentary motion model. A different problem could result in a Jacobian which is difficult or impossible to derive analytically. In contrast, the UKF only requires you to provide a function that computes the system motion model and another for the measurement model. \n\nThere are many cases where the Jacobian cannot be found analytically. The details are beyond the scope of this book, but you will have to use numerical methods to compute the Jacobian. That undertaking is not trivial, and you will spend a significant portion of a master's degree at a STEM school learning techniques to handle such situations. Even then you'll likely only be able to solve problems related to your field - an aeronautical engineer learns a lot about Navier Stokes equations, but not much about modelling chemical reaction rates. \n\nSo, UKFs are easy. Are they accurate? In practice they often perform better than the EKF. You can find plenty of research papers that prove that the UKF outperforms the EKF in various problem domains. It's not hard to understand why this would be true. The EKF works by linearizing the system model and measurement model at a single point, and the UKF uses $2n+1$ points.\n\nLet's look at a specific example. Take $f(x) = x^3$ and pass a Gaussian distribution through it. I will compute an accurate answer using a monte carlo simulation. I generate 50,000 points randomly distributed according to the Gaussian, pass each through $f(x)$, then compute the mean and variance of the result. \n\nThe EKF linearizes the function by taking the derivative to find the slope at the evaluation point $x$. This slope becomes the linear function that we use to transform the Gaussian. Here is a plot of that.",
"_____no_output_____"
]
],
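To make the comparison concrete, here is a small Monte Carlo sketch of what those plots summarize: samples from a Gaussian are pushed through $f(x) = x^3$ and the empirical mean and variance are compared with the EKF-style linearized prediction, which maps the mean through $f$ and scales the variance by the squared slope. The input mean and standard deviation are arbitrary choices for illustration.

```python
import numpy as np

mean, std = 1.0, 0.5
xs = np.random.normal(mean, std, 50000)
ys = xs**3

# EKF-style linearization at the mean: f(mean), slope f'(mean) = 3*mean**2
lin_mean = mean**3
lin_var = (3 * mean**2)**2 * std**2

print('monte carlo mean/var:', ys.mean(), ys.var())
print('linearized  mean/var:', lin_mean, lin_var)
```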
[
[
"import kf_book.nonlinear_plots as nonlinear_plots\nnonlinear_plots.plot_ekf_vs_mc()",
"_____no_output_____"
]
],
[
[
"The EKF computation is rather inaccurate. In contrast, here is the performance of the UKF:",
"_____no_output_____"
]
],
[
[
"nonlinear_plots.plot_ukf_vs_mc(alpha=0.001, beta=3., kappa=1.)",
"_____no_output_____"
]
],
[
[
"Here we can see that the computation of the UKF's mean is accurate to 2 decimal places. The standard deviation is slightly off, but you can also fine tune how the UKF computes the distribution by using the $\\alpha$, $\\beta$, and $\\gamma$ parameters for generating the sigma points. Here I used $\\alpha=0.001$, $\\beta=3$, and $\\gamma=1$. Feel free to modify them to see the result. You should be able to get better results than I did. However, avoid over-tuning the UKF for a specific test. It may perform better for your test case, but worse in general.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |