hexsha (stringlengths 40-40) | size (int64 6-14.9M) | ext (stringclasses 1 value) | lang (stringclasses 1 value) | max_stars_repo_path (stringlengths 6-260) | max_stars_repo_name (stringlengths 6-119) | max_stars_repo_head_hexsha (stringlengths 40-41) | max_stars_repo_licenses (sequence) | max_stars_count (int64 1-191k ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24-24 ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24-24 ⌀) | max_issues_repo_path (stringlengths 6-260) | max_issues_repo_name (stringlengths 6-119) | max_issues_repo_head_hexsha (stringlengths 40-41) | max_issues_repo_licenses (sequence) | max_issues_count (int64 1-67k ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24-24 ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24-24 ⌀) | max_forks_repo_path (stringlengths 6-260) | max_forks_repo_name (stringlengths 6-119) | max_forks_repo_head_hexsha (stringlengths 40-41) | max_forks_repo_licenses (sequence) | max_forks_count (int64 1-105k ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24-24 ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24-24 ⌀) | avg_line_length (float64 2-1.04M) | max_line_length (int64 2-11.2M) | alphanum_fraction (float64 0-1) | cells (sequence) | cell_types (sequence) | cell_type_groups (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e7315d923b2dcc05446580b2b91d3aabcd4b2907 | 881,023 | ipynb | Jupyter Notebook | Notebooks/Reddit_relational_net_extraction.ipynb | linesn/reddit_analysis | 9b46c7fad825bcbfbdd13dd895e08ec8f9cb9d5e | [
"MIT"
] | null | null | null | Notebooks/Reddit_relational_net_extraction.ipynb | linesn/reddit_analysis | 9b46c7fad825bcbfbdd13dd895e08ec8f9cb9d5e | [
"MIT"
] | null | null | null | Notebooks/Reddit_relational_net_extraction.ipynb | linesn/reddit_analysis | 9b46c7fad825bcbfbdd13dd895e08ec8f9cb9d5e | [
"MIT"
] | null | null | null | 1,150.160574 | 852,247 | 0.953545 | [
[
[
"<a href=\"https://colab.research.google.com/github/linesn/reddit_analysis/blob/main/Notebooks/Reddit_relational_net_extraction.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Module 5 Homework - Extracting a relational network from Reddit data\nNick Lines",
"_____no_output_____"
],
[
"## Imports and environment set up",
"_____no_output_____"
]
],
[
[
"%pylab inline\nimport re\nimport os\nimport pandas as pd\nimport scipy.sparse as sp\nimport networkx as nx\nfrom itertools import chain",
"Populating the interactive namespace from numpy and matplotlib\n"
],
[
"try:\n import datashader as ds\n import datashader.transfer_functions as tf\n from datashader.layout import random_layout, circular_layout, forceatlas2_layout\n from datashader.bundling import connect_edges, hammer_bundle\nexcept:\n %pip install datashader\n import datashader as ds\n import datashader.transfer_functions as tf\n from datashader.layout import random_layout, circular_layout, forceatlas2_layout\n from datashader.bundling import connect_edges, hammer_bundle",
"Collecting datashader\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/df/24/22f96084785d9cc424f1e70541a2803eec807c82e6bdab87c4b71fd96d10/datashader-0.12.1-py2.py3-none-any.whl (15.8MB)\n\u001b[K |████████████████████████████████| 15.8MB 319kB/s \n\u001b[?25hRequirement already satisfied: param>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from datashader) (1.10.1)\nRequirement already satisfied: pandas>=0.24.1 in /usr/local/lib/python3.7/dist-packages (from datashader) (1.1.5)\nRequirement already satisfied: pyct>=0.4.4 in /usr/local/lib/python3.7/dist-packages (from datashader) (0.4.8)\nRequirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from datashader) (1.4.1)\nRequirement already satisfied: numpy>=1.7 in /usr/local/lib/python3.7/dist-packages (from datashader) (1.19.5)\nRequirement already satisfied: toolz>=0.7.4 in /usr/local/lib/python3.7/dist-packages (from datashader) (0.11.1)\nRequirement already satisfied: numba!=0.49.*,!=0.50.*,>=0.37.0 in /usr/local/lib/python3.7/dist-packages (from datashader) (0.51.2)\nRequirement already satisfied: pillow>=3.1.1 in /usr/local/lib/python3.7/dist-packages (from datashader) (7.1.2)\nRequirement already satisfied: xarray>=0.9.6 in /usr/local/lib/python3.7/dist-packages (from datashader) (0.15.1)\nRequirement already satisfied: dask[complete]>=0.18.0 in /usr/local/lib/python3.7/dist-packages (from datashader) (2.12.0)\nRequirement already satisfied: bokeh in /usr/local/lib/python3.7/dist-packages (from datashader) (2.3.1)\nRequirement already satisfied: colorcet>=0.9.0 in /usr/local/lib/python3.7/dist-packages (from datashader) (2.0.6)\nCollecting datashape>=0.5.1\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/a6/5b/95b2ed56b61e649b69c9a5b1ecb32ff0a5cd68b9f69f5aa7774540e6b444/datashape-0.5.2.tar.gz (76kB)\n\u001b[K |████████████████████████████████| 81kB 9.6MB/s \n\u001b[?25hRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.24.1->datashader) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.24.1->datashader) (2018.9)\nRequirement already satisfied: llvmlite<0.35,>=0.34.0.dev0 in /usr/local/lib/python3.7/dist-packages (from numba!=0.49.*,!=0.50.*,>=0.37.0->datashader) (0.34.0)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from numba!=0.49.*,!=0.50.*,>=0.37.0->datashader) (54.2.0)\nCollecting distributed>=2.0; extra == \"complete\"\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/0c/b0/3454dc44239c526f9c9e4cf04f62823776b71f927db74302986d56e7a9a1/distributed-2021.4.0-py3-none-any.whl (684kB)\n\u001b[K |████████████████████████████████| 686kB 39.5MB/s \n\u001b[?25hRequirement already satisfied: cloudpickle>=0.2.1; extra == \"complete\" in /usr/local/lib/python3.7/dist-packages (from dask[complete]>=0.18.0->datashader) (1.3.0)\nCollecting fsspec>=0.6.0; extra == \"complete\"\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/e9/91/2ef649137816850fa4f4c97c6f2eabb1a79bf0aa2c8ed198e387e373455e/fsspec-2021.4.0-py3-none-any.whl (108kB)\n\u001b[K |████████████████████████████████| 112kB 47.6MB/s \n\u001b[?25hRequirement already satisfied: PyYaml; extra == \"complete\" in /usr/local/lib/python3.7/dist-packages (from dask[complete]>=0.18.0->datashader) (3.13)\nCollecting partd>=0.3.10; extra == \"complete\"\n Downloading 
https://files.pythonhosted.org/packages/41/94/360258a68b55f47859d72b2d0b2b3cfe0ca4fbbcb81b78812bd00ae86b7c/partd-1.2.0-py3-none-any.whl\nRequirement already satisfied: tornado>=5.1 in /usr/local/lib/python3.7/dist-packages (from bokeh->datashader) (5.1.1)\nRequirement already satisfied: typing-extensions>=3.7.4 in /usr/local/lib/python3.7/dist-packages (from bokeh->datashader) (3.7.4.3)\nRequirement already satisfied: packaging>=16.8 in /usr/local/lib/python3.7/dist-packages (from bokeh->datashader) (20.9)\nRequirement already satisfied: Jinja2>=2.7 in /usr/local/lib/python3.7/dist-packages (from bokeh->datashader) (2.11.3)\nCollecting multipledispatch>=0.4.7\n Downloading https://files.pythonhosted.org/packages/89/79/429ecef45fd5e4504f7474d4c3c3c4668c267be3370e4c2fd33e61506833/multipledispatch-0.6.0-py3-none-any.whl\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas>=0.24.1->datashader) (1.15.0)\nRequirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from distributed>=2.0; extra == \"complete\"->dask[complete]>=0.18.0->datashader) (1.0.2)\nRequirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.7/dist-packages (from distributed>=2.0; extra == \"complete\"->dask[complete]>=0.18.0->datashader) (2.3.0)\nRequirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.7/dist-packages (from distributed>=2.0; extra == \"complete\"->dask[complete]>=0.18.0->datashader) (5.4.8)\nRequirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.7/dist-packages (from distributed>=2.0; extra == \"complete\"->dask[complete]>=0.18.0->datashader) (2.0.0)\nRequirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from distributed>=2.0; extra == \"complete\"->dask[complete]>=0.18.0->datashader) (1.7.0)\nRequirement already satisfied: click>=6.6 in /usr/local/lib/python3.7/dist-packages (from distributed>=2.0; extra == \"complete\"->dask[complete]>=0.18.0->datashader) (7.1.2)\nCollecting locket\n Downloading https://files.pythonhosted.org/packages/50/b8/e789e45b9b9c2db75e9d9e6ceb022c8d1d7e49b2c085ce8c05600f90a96b/locket-0.2.1-py2.py3-none-any.whl\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=16.8->bokeh->datashader) (2.4.7)\nRequirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from Jinja2>=2.7->bokeh->datashader) (1.1.1)\nRequirement already satisfied: heapdict in /usr/local/lib/python3.7/dist-packages (from zict>=0.1.3->distributed>=2.0; extra == \"complete\"->dask[complete]>=0.18.0->datashader) (1.0.1)\nBuilding wheels for collected packages: datashape\n Building wheel for datashape (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for datashape: filename=datashape-0.5.2-cp37-none-any.whl size=59430 sha256=45e004cae99efb9642daf4dba270b2fac0895b8efe2f4247f2d813ba387bcb9c\n Stored in directory: /root/.cache/pip/wheels/8d/06/05/c1cba3d57bdcfd3960e3f60a9fdc97e4baef2ef09af0ad1ef8\nSuccessfully built datashape\n\u001b[31mERROR: distributed 2021.4.0 has requirement cloudpickle>=1.5.0, but you'll have cloudpickle 1.3.0 which is incompatible.\u001b[0m\n\u001b[31mERROR: distributed 2021.4.0 has requirement dask>=2021.03.0, but you'll have dask 2.12.0 which is incompatible.\u001b[0m\nInstalling collected packages: multipledispatch, datashape, datashader, distributed, fsspec, locket, partd\n Found existing installation: distributed 1.25.3\n Uninstalling distributed-1.25.3:\n Successfully uninstalled distributed-1.25.3\nSuccessfully installed datashader-0.12.1 datashape-0.5.2 distributed-2021.4.0 fsspec-2021.4.0 locket-0.2.1 multipledispatch-0.6.0 partd-1.2.0\n"
]
],
[
[
"This cell defines system-dependent configuration such as those different in Linux vs. Windows",
"_____no_output_____"
]
],
[
[
"if 'COLAB_GPU' in os.environ: # a hacky way of determining if you are in colab.\n print(\"Notebook is running in colab\")\n from google.colab import drive\n drive.mount(\"/content/drive\")\n DATA_DIR = \"./drive/My Drive/Data/\"\n \nelse:\n # Get the system information from the OS\n PLATFORM_SYSTEM = platform.system()\n\n # Darwin is macOS\n if PLATFORM_SYSTEM == \"Darwin\":\n EXECUTABLE_PATH = Path(\"../dependencies/chromedriver\")\n elif PLATFORM_SYSTEM == \"Windows\":\n EXECUTABLE_PATH = Path(\"../dependencies/chromedriver.exe\")\n else:\n logging.critical(\"Chromedriver not found or Chromedriver is outdated...\")\n exit()\n DATA_DIR = \"../Data/raw/\"",
"Notebook is running in colab\nMounted at /content/drive\n"
]
],
[
[
"## Data extraction",
"_____no_output_____"
]
],
[
[
"comment_df = pd.read_csv(DATA_DIR + \"raw/Reddit/REDDIT_COMMENTS_2021-02-03T17-02-00-0500.csv\")\npost_df = pd.read_csv(DATA_DIR + \"raw/Reddit/REDDIT_POSTS_2021-02-03T16-56-30-0500.csv\")",
"_____no_output_____"
],
[
"comment_posts = set(comment_df.post_id.unique())",
"_____no_output_____"
],
[
"posts = set(post_df.post_id.unique())",
"_____no_output_____"
],
[
"print(f\"There are {len(comment_posts.intersection(posts))} posts with comments on them in the data\")",
"There are 833 posts with comments on them in the data\n"
]
],
[
[
"We'll now make two networkks. The first is a post-author bipartite graph pairing each post to the list of agents. The second is a comment-author-post network that pairs agents with the posts they comment on. We'll need to make indices that track which agent and which post get assigned which integer index.",
"_____no_output_____"
]
],
[
[
"agents = array(list(set(comment_df.comment_author).union(set(post_df.post_author))))\nprint(f\"There are {agents.shape[0]} agents in the network.\")\nall_posts = array(list(posts.union(comment_posts)))\nprint(f\"There are {all_posts.shape[0]} posts in the network.\")",
"There are 6787 agents in the network.\nThere are 1095 posts in the network.\n"
],
[
"post_author_net = nx.from_edgelist(post_df[[\"post_id\", \"post_author\"]].values)",
"_____no_output_____"
],
[
"commenter_post_net = nx.from_edgelist(comment_df[[\"comment_author\", \"post_id\"]].values)",
"_____no_output_____"
],
[
"post_author_net.add_nodes_from(commenter_post_net)\ncommenter_post_net.add_nodes_from(post_author_net)\n#X = nx.adjacency_matrix(post_author_net)\n#Y = nx.adjacency_matrix(commenter_post_net)\nXprime = nx.Graph()\nXprime.add_nodes_from(sorted(post_author_net.nodes(data=True)))\nXprime.add_edges_from(post_author_net.edges(data=True))\nYprime = nx.Graph()\nYprime.add_nodes_from(sorted(commenter_post_net.nodes(data=True)))\nYprime.add_edges_from(commenter_post_net.edges(data=True))\nX = nx.adjacency_matrix(Xprime)\nY = nx.adjacency_matrix(Yprime)",
"_____no_output_____"
],
[
"len(list(post_author_net.nodes))",
"_____no_output_____"
],
[
"print(list(Xprime.nodes)==list(Yprime.nodes))",
"True\n"
]
],
[
[
"Because it is convenient to do so in `networkx`, we will include both the agents and the post id's in the nodes lists (even though that is less interpretable). We've created two matrices, `X` and `Y`, which respectively represent the agents that commented on each post, and the posts that were authored by each agent. Multiplying these to get `Z` we have now an adjacency matrix representing the connections between agents, starting at commenters and ending at post authors. This is what I am most interested in: seeing which agents are connected in Reddit political discussions.",
"_____no_output_____"
]
],
[
[
"X.shape",
"_____no_output_____"
],
[
"Y.shape",
"_____no_output_____"
],
[
"Z = X*Y",
"_____no_output_____"
],
[
"commenter_author_net = nx.from_scipy_sparse_matrix(Z)",
"_____no_output_____"
],
[
"commenter_author_net = nx.relabel.relabel_nodes(commenter_author_net, Xprime.nodes)",
"_____no_output_____"
],
[
"m = nx.adjacency_matrix(commenter_author_net)",
"_____no_output_____"
],
[
"mm = m.todense()",
"_____no_output_____"
],
[
"pd.DataFrame(mm).to_csv(DATA_DIR+\"/adjacency.csv\", header=None, index=None)",
"_____no_output_____"
],
[
"def hasNumbers(inputString):\n return bool(re.search(r'\\d', inputString))",
"_____no_output_____"
],
[
"numbers = [1 if hasNumbers(i) else 0 for i in Xprime.nodes()]\npure_text = [1 if i.isalpha() else 0 for i in Xprime.nodes()]",
"_____no_output_____"
],
[
"actors_df = pd.DataFrame({\"actors\":Xprime.nodes(), \"numbers\":numbers, \"pure_text\":pure_text})",
"_____no_output_____"
],
[
"actors_df.to_csv(DATA_DIR+\"/actors.csv\", index=None)",
"_____no_output_____"
],
[
"mm.shape",
"_____no_output_____"
]
],
[
[
"## Visualization\n\nTo visualize our networks we will use the `networkx` and `datashader` libraries, following the documentation available at [https://datashader.org/user_guide/Networks.html](https://datashader.org/user_guide/Networks.html). ",
"_____no_output_____"
]
],
[
[
"cvsopts = dict(plot_height=400, plot_width=400)\n\n\ndef nodesplot(nodes, name=None, canvas=None, cat=None):\n canvas = ds.Canvas(**cvsopts) if canvas is None else canvas\n aggregator=None if cat is None else ds.count_cat(cat)\n agg=canvas.points(nodes,'x','y',aggregator)\n return tf.spread(tf.shade(agg, cmap=[\"#FF3333\"]), px=3, name=name)\n\n\ndef edgesplot(edges, name=None, canvas=None):\n canvas = ds.Canvas(**cvsopts) if canvas is None else canvas\n return tf.shade(canvas.line(edges, 'x','y', agg=ds.count()), name=name)\n\n\ndef graphplot(nodes, edges, name=\"\", canvas=None, cat=None):\n if canvas is None:\n xr = nodes.x.min(), nodes.x.max()\n yr = nodes.y.min(), nodes.y.max()\n canvas = ds.Canvas(x_range=xr, y_range=yr, **cvsopts)\n \n np = nodesplot(nodes, name + \" nodes\", canvas, cat)\n ep = edgesplot(edges, name + \" edges\", canvas)\n return tf.stack(ep, np, how=\"over\", name=name)\n\n\ndef ng(graph,name):\n graph.name = name\n return graph\n\n\ndef nx_layout(graph):\n layout = nx.circular_layout(graph)\n #layout = nx.fruchterman_reingold_layout(graph)\n data = [[node]+layout[node].tolist() for node in graph.nodes]\n\n nodes = pd.DataFrame(data, columns=['id', 'x', 'y'])\n nodes.set_index('id', inplace=True)\n\n edges = pd.DataFrame(list(graph.edges), columns=['source', 'target'])\n return nodes, edges\n\n\ndef nx_plot(graph, name=\"\"):\n print(graph.name, len(graph.edges))\n nodes, edges = nx_layout(graph)\n \n direct = connect_edges(nodes, edges)\n bundled_bw005 = hammer_bundle(nodes, edges)\n bundled_bw030 = hammer_bundle(nodes, edges, initial_bandwidth=0.30)\n\n return [graphplot(nodes, direct, graph.name),\n graphplot(nodes, bundled_bw005, \"Bundled bw=0.05\"),\n graphplot(nodes, bundled_bw030, \"Bundled bw=0.30\")]",
"_____no_output_____"
],
[
"plots = [nx_plot(g) for g in\n [ng(post_author_net, name=\"Post Author Network\"), \n ng(commenter_post_net, name=\"Commenter Post Network\"),\n ng(commenter_author_net, name=\"Commenter Author Network\")\n ]]\n\ntf.Images(*chain.from_iterable(plots)).cols(3)",
"Post Author Network 1095\nCommenter Post Network 9001\nCommenter Author Network 11962\n"
]
],
[
[
"These plots are more meaningful for the Commenter Author Network than the other two, but they are still interesting. They show the agents and posts in a circular layout with connections drawn in blue between agents and posts. The leftmost plot in each row shows all direct connections. The remaining columns show the bundled results with two different bundling weights. This means that agents and posts that are similarly connected are bundled together to reduce the number of lines shown and to make community groupings more visible.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
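The key step in the notebook above is multiplying the two bipartite adjacency matrices (`Z = X*Y`) so that commenters become linked to the authors of the posts they commented on. Below is a minimal sketch of that projection step on toy data; the usernames and post IDs are invented, and `from_scipy_sparse_array` is the newer networkx name for the `from_scipy_sparse_matrix` call used in the notebook.

```python
# Minimal sketch of the bipartite-projection step with toy data (hypothetical
# usernames and post IDs). Newer networkx exposes from_scipy_sparse_array;
# older versions (as in the notebook) call the same conversion from_scipy_sparse_matrix.
import networkx as nx

commenter_post = nx.from_edgelist([("alice", "p1"), ("bob", "p1"), ("alice", "p2")])
post_author = nx.from_edgelist([("p1", "carol"), ("p2", "dave")])

# Give both graphs the same node set and fix one common ordering
commenter_post.add_nodes_from(post_author)
post_author.add_nodes_from(commenter_post)
order = sorted(commenter_post.nodes())

X = nx.adjacency_matrix(commenter_post, nodelist=order)  # commenter <-> post links
Y = nx.adjacency_matrix(post_author, nodelist=order)     # post <-> author links
Z = X @ Y                                                # two-hop paths: commenter -> post -> author

commenter_author = nx.from_scipy_sparse_array(Z)
commenter_author = nx.relabel_nodes(commenter_author, dict(enumerate(order)))
print(sorted(commenter_author.edges()))  # e.g. [('alice', 'carol'), ('alice', 'dave'), ('bob', 'carol')]
```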
e7316673b8f6abb8f5e4bbdffb7da8093de501d3 | 21,096 | ipynb | Jupyter Notebook | notebooks/02_numerical_pipeline_scaling.ipynb | glemaitre/scikit-learn-mooc | a2b9197235d139fed201f3e0397d7f086c441593 | [
"CC-BY-4.0"
] | null | null | null | notebooks/02_numerical_pipeline_scaling.ipynb | glemaitre/scikit-learn-mooc | a2b9197235d139fed201f3e0397d7f086c441593 | [
"CC-BY-4.0"
] | null | null | null | notebooks/02_numerical_pipeline_scaling.ipynb | glemaitre/scikit-learn-mooc | a2b9197235d139fed201f3e0397d7f086c441593 | [
"CC-BY-4.0"
] | null | null | null | 36 | 183 | 0.635523 | [
[
[
"# Preprocessing for numerical features\n\nIn this notebook, we will still use only numerical features.\n\nWe will introduce these new aspects:\n\n* an example of preprocessing, namely **scaling numerical variables**;\n* using a scikit-learn **pipeline** to chain preprocessing and model\n training;\n* assessing the statistical performance of our model via **cross-validation**\n instead of a single train-test split.\n\n## Data preparation\n\nFirst, let's load the full adult census dataset.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nadult_census = pd.read_csv(\"../datasets/adult-census.csv\")",
"_____no_output_____"
]
],
[
[
"We will now drop the target from the data we will use to train our\npredictive model.",
"_____no_output_____"
]
],
[
[
"target_name = \"class\"\ntarget = adult_census[target_name]\ndata = adult_census.drop(columns=target_name)",
"_____no_output_____"
]
],
[
[
"<div class=\"admonition caution alert alert-warning\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Caution!</p>\n<p class=\"last\">Here and later, we use the name <tt class=\"docutils literal\">data</tt> and <tt class=\"docutils literal\">target</tt> to be explicit. In\nscikit-learn documentation, <tt class=\"docutils literal\">data</tt> is commonly named <tt class=\"docutils literal\">X</tt> and <tt class=\"docutils literal\">target</tt> is\ncommonly called <tt class=\"docutils literal\">y</tt>.</p>\n</div>",
"_____no_output_____"
],
[
"Then, we select only the numerical columns, as seen in the previous\nnotebook.",
"_____no_output_____"
]
],
[
[
"numerical_columns = [\n \"age\", \"capital-gain\", \"capital-loss\", \"hours-per-week\"]\n\ndata_numeric = data[numerical_columns]",
"_____no_output_____"
]
],
[
[
"Finally, we can divide our dataset into a train and test sets.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\n\ndata_train, data_test, target_train, target_test = train_test_split(\n data_numeric, target, random_state=42)\n\n# ## Model fitting with preprocessing\n#\n# A range of preprocessing algorithms in scikit-learn allow us to transform\n# the input data before training a model. In our case, we will standardize the\n# data and then train a new logistic regression model on that new version of\n# the dataset.\n#\n# Let's start by printing some statistics about the training data.",
"_____no_output_____"
],
[
"data_train.describe()",
"_____no_output_____"
]
],
[
[
"We see that the dataset's features span across different ranges. Some\nalgorithms make some assumptions regarding the feature distributions and\nusually normalizing features will be helpful to address these assumptions.\n\n<div class=\"admonition tip alert alert-warning\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Tip</p>\n<p>Here are some reasons for scaling features:</p>\n<ul class=\"last simple\">\n<li>Models that rely on the distance between a pair of samples, for instance\nk-nearest neighbors, should be trained on normalized features to make each\nfeature contribute approximately equally to the distance computations.</li>\n<li>Many models such as logistic regression use a numerical solver (based on\ngradient descent) to find their optimal parameters. This solver converges\nfaster when the features are scaled.</li>\n</ul>\n</div>\n\nWhether or not a machine learning model requires scaling the features depends\non the model family. Linear models such as logistic regression generally\nbenefit from scaling the features while other models such as decision trees\ndo not need such preprocessing (but will not suffer from it).\n\nWe show how to apply such normalization using a scikit-learn transformer\ncalled `StandardScaler`. This transformer shifts and scales each feature\nindividually so that they all have a 0-mean and a unit standard deviation.\n\nWe will investigate different steps used in scikit-learn to achieve such a\ntransformation of the data.\n\nFirst, one needs to call the method `fit` in order to learn the scaling from\nthe data.",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import StandardScaler\n\nscaler = StandardScaler()\nscaler.fit(data_train)",
"_____no_output_____"
]
],
[
[
"The `fit` method for transformers is similar to the `fit` method for\npredictors. The main difference is that the former has a single argument (the\ndata matrix), whereas the latter has two arguments (the data matrix and the\ntarget).\n\n\n\nIn this case, the algorithm needs to compute the mean and standard deviation\nfor each feature and store them into some NumPy arrays. Here, these\nstatistics are the model states.\n\n<div class=\"admonition note alert alert-info\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n<p class=\"last\">The fact that the model states of this scaler are arrays of means and\nstandard deviations is specific to the <tt class=\"docutils literal\">StandardScaler</tt>. Other\nscikit-learn transformers will compute different statistics and store them\nas model states, in the same fashion.</p>\n</div>\n\nWe can inspect the computed means and standard deviations.",
"_____no_output_____"
]
],
[
[
"scaler.mean_",
"_____no_output_____"
],
[
"scaler.scale_",
"_____no_output_____"
]
],
[
[
"<div class=\"admonition note alert alert-info\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n<p class=\"last\">scikit-learn convention: if an attribute is learned from the data, its name\nends with an underscore (i.e. <tt class=\"docutils literal\">_</tt>), as in <tt class=\"docutils literal\">mean_</tt> and <tt class=\"docutils literal\">scale_</tt> for the\n<tt class=\"docutils literal\">StandardScaler</tt>.</p>\n</div>",
"_____no_output_____"
],
[
"Scaling the data is applied to each feature individually (i.e. each column in\nthe data matrix). For each feature, we subtract its mean and divide by its\nstandard deviation.\n\nOnce we have called the `fit` method, we can perform data transformation by\ncalling the method `transform`.",
"_____no_output_____"
]
],
[
[
"data_train_scaled = scaler.transform(data_train)\ndata_train_scaled",
"_____no_output_____"
]
],
[
[
"Let's illustrate the internal mechanism of the `transform` method and put it\nto perspective with what we already saw with predictors.\n\n\n\nThe `transform` method for transformers is similar to the `predict` method\nfor predictors. It uses a predefined function, called a **transformation\nfunction**, and uses the model states and the input data. However, instead of\noutputting predictions, the job of the `transform` method is to output a\ntransformed version of the input data.",
"_____no_output_____"
],
[
"Finally, the method `fit_transform` is a shorthand method to call\nsuccessively `fit` and then `transform`.\n\n",
"_____no_output_____"
]
],
[
[
"data_train_scaled = scaler.fit_transform(data_train)\ndata_train_scaled",
"_____no_output_____"
],
[
"data_train_scaled = pd.DataFrame(data_train_scaled,\n columns=data_train.columns)\ndata_train_scaled.describe()",
"_____no_output_____"
]
],
[
[
"We can easily combine these sequential operations with a scikit-learn\n`Pipeline`, which chains together operations and is used as any other\nclassifier or regressor. The helper function `make_pipeline` will create a\n`Pipeline`: it takes as arguments the successive transformations to perform,\nfollowed by the classifier or regressor model, and will assign automatically\na name at steps based on the name of the classes.",
"_____no_output_____"
]
],
[
[
"import time\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\n\nmodel = make_pipeline(StandardScaler(), LogisticRegression())",
"_____no_output_____"
]
],
[
[
"This predictive pipeline exposes the same methods as the final predictor:\n`fit` and `predict` (and additionally `predict_proba`, `decision_function`,\nor `score`).",
"_____no_output_____"
]
],
[
[
"start = time.time()\nmodel.fit(data_train, target_train)\nelapsed_time = time.time() - start",
"_____no_output_____"
]
],
[
[
"We can represent the internal mechanism of a pipeline when calling `fit`\nby the following diagram:\n\n\n\nWhen calling `model.fit`, the method `fit_transform` from each underlying\ntransformer in the pipeline will be called to: (i) learn their internal\nmodel states and (ii) transform the training data. Finally, the preprocessed\ndata are provided to train the predictor.\n\nTo predict the targets given a test set, one uses the `predict` method.",
"_____no_output_____"
]
],
[
[
"predicted_target = model.predict(data_test)\npredicted_target[:5]",
"_____no_output_____"
]
],
[
[
"Let's show the underlying mechanism:\n\n\n\nThe method `transform` of each transformer is called to preprocess the data.\nNote that there is no need to call the `fit` method for these transformers\nbecause we are using the internal model states computed when calling\n`model.fit`. The preprocessed data is then provided to the predictor that\nwill output the predicted target by calling its method `predict`.\n\nAs a shorthand, we can check the score of the full predictive pipeline\ncalling the method `model.score`. Thus, let's check the computational and\nstatistical performance of such a predictive pipeline.",
"_____no_output_____"
]
],
[
[
"model_name = model.__class__.__name__\nscore = model.score(data_test, target_test)\nprint(f\"The accuracy using a {model_name} is {score:.3f} \"\n f\"with a fitting time of {elapsed_time:.3f} seconds \"\n f\"in {model[-1].n_iter_[0]} iterations\")",
"_____no_output_____"
]
],
[
[
"We could compare this predictive model with the predictive model used in\nthe previous notebook which did not scale features.",
"_____no_output_____"
]
],
[
[
"model = LogisticRegression()\nstart = time.time()\nmodel.fit(data_train, target_train)\nelapsed_time = time.time() - start",
"_____no_output_____"
],
[
"model_name = model.__class__.__name__\nscore = model.score(data_test, target_test)\nprint(f\"The accuracy using a {model_name} is {score:.3f} \"\n f\"with a fitting time of {elapsed_time:.3f} seconds \"\n f\"in {model.n_iter_[0]} iterations\")",
"_____no_output_____"
]
],
[
[
"We see that scaling the data before training the logistic regression was\nbeneficial in terms of computational performance. Indeed, the number of\niterations decreased as well as the training time. The statistical\nperformance did not change since both models converged.\n\n<div class=\"admonition warning alert alert-danger\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Warning</p>\n<p class=\"last\">Working with non-scaled data will potentially force the algorithm to iterate\nmore as we showed in the example above. There is also a catastrophic scenario\nwhere the number of required iterations are more than the maximum number of\niterations allowed by the predictor (controlled by the <tt class=\"docutils literal\">max_iter</tt>) parameter.\nTherefore, before increasing <tt class=\"docutils literal\">max_iter</tt>, make sure that the data are well\nscaled.</p>\n</div>",
"_____no_output_____"
],
[
"## Model evaluation using cross-validation\n\nIn the previous example, we split the original data into a training set and a\ntesting set. This strategy has several issues: in a setting where the\namount of data is small, the subset used to train or test will be small.\nMoreover, if the splitting was done in a random manner, we do not have\ninformation regarding the confidence of the results obtained.\n\nInstead, we can use cross-validation. Cross-validation consists of repeating\nthis random splitting into training and testing sets and aggregating the\nmodel statistical performance. By repeating the experiment, one can get an\nestimate of the variability of the model statistical performance.\n\nThe next figure shows how the dataset is partitioned into train and test\nsamples at each iteration the cross-validation procedure.\n\n\n\n<div class=\"admonition note alert alert-info\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n<p class=\"last\">This figure shows a particular cross-validation strategy named K-fold. There\nare a variety of different cross-validation strategies. Some of these aspects\nwill be covered in more details in future notebooks.</p>\n</div>\n\nFor each cross-validation split, the procedure trains a model on all the red\nsamples and evaluate the score of the model on the blue samples.\nCross-validation is therefore computationally intensive because it requires\ntraining several models instead of one.\n\nIn scikit-learn, the function `cross_validate` allows to do cross-validation\nand you need to pass it the model, the data, and the target. Since there\nexists several cross-validation strategies, `cross_validate` takes a\nparameter `cv` which defines the splitting strategy.",
"_____no_output_____"
]
],
[
[
"%%time\nfrom sklearn.model_selection import cross_validate\n\nmodel = make_pipeline(StandardScaler(), LogisticRegression())\ncv_result = cross_validate(model, data_numeric, target, cv=5)\ncv_result",
"_____no_output_____"
]
],
[
[
"The output of `cross_validate` is a Python dictionary, which by default\ncontains three entries: (i) the time to train the model on the training data\nfor each fold, (ii) the time to predict with the model on the testing data\nfor each fold, and (iii) the default score on the testing data for each fold.\n\nSetting `cv=5` created 5 distinct splits to get 5 variations for the training\nand testing sets. Each training set is used to fit one model which is then\nscored on the matching test set. This strategy is called K-fold\ncross-validation where `K` corresponds to the number of splits.\n\nNote that by default the `cross_validate` function discards the 5 models that\nwere trained on the different overlapping subset of the dataset. The goal of\ncross-validation is not to train a model, but rather to estimate\napproximately the generalization performance of a model that would have been\ntrained to the full training set, along with an estimate of the variability\n(uncertainty on the generalization accuracy).\n\nYou can pass additional parameters to `cross_validate` to get more\ninformation, for instance training scores. These features will be covered in\na future notebook.\n\nLet's extract the test scores from the `cv_result` dictionary and compute\nthe mean accuracy and the variation of the accuracy across folds.",
"_____no_output_____"
]
],
[
[
"scores = cv_result[\"test_score\"]\nprint(\"The mean cross-validation accuracy is: \"\n f\"{scores.mean():.3f} +/- {scores.std():.3f}\")",
"_____no_output_____"
]
],
[
[
"Note that by computing the standard-deviation of the cross-validation scores,\nwe can estimate the uncertainty of our model statistical performance. This is\nthe main advantage of cross-validation and can be crucial in practice, for\nexample when comparing different models to figure out whether one is better\nthan the other or whether the statistical performance differences are within\nthe uncertainty.\n\nIn this particular case, only the first 2 decimals seem to be trustworthy. If\nyou go up in this notebook, you can check that the performance we get\nwith cross-validation is compatible with the one from a single train-test\nsplit.",
"_____no_output_____"
],
[
"In this notebook we have:\n\n* seen the importance of **scaling numerical variables**;\n* used a **pipeline** to chain scaling and logistic regression training;\n* assessed the statistical performance of our model via **cross-validation**.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
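The notebook in the row above explains that `StandardScaler` stores per-feature means and standard deviations as model state (`mean_`, `scale_`) and that `transform` subtracts the mean and divides by the standard deviation. A small sketch verifying that equivalence on synthetic data (not the adult-census dataset used above):

```python
# Sketch: StandardScaler.transform(X) is exactly (X - mean_) / scale_.
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(loc=10.0, scale=3.0, size=(100, 2))  # synthetic stand-in for data_train

scaler = StandardScaler().fit(X)             # fit learns mean_ and scale_
manual = (X - scaler.mean_) / scaler.scale_  # reproduce transform by hand
assert np.allclose(manual, scaler.transform(X))

# The scaled features have (approximately) zero mean and unit standard deviation
print(manual.mean(axis=0).round(6), manual.std(axis=0).round(6))
```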
e7316d1e0531e1207031009565b3ab0126270461 | 23,298 | ipynb | Jupyter Notebook | 101notebook/ipython-minibook/chapter5/505-cython-numpy.ipynb | OpenBookProjects/ipynb | 72a28109e8e30aea0b9c6713e78821e4affa2e33 | [
"MIT"
] | 6 | 2015-06-08T12:50:14.000Z | 2018-11-20T10:05:01.000Z | 101notebook/ipython-minibook/chapter5/505-cython-numpy.ipynb | OpenBookProjects/ipynb | 72a28109e8e30aea0b9c6713e78821e4affa2e33 | [
"MIT"
] | 15 | 2021-09-12T15:06:13.000Z | 2022-03-31T19:02:08.000Z | 101notebook/ipython-minibook/chapter5/505-cython-numpy.ipynb | OpenBookProjects/ipynb | 72a28109e8e30aea0b9c6713e78821e4affa2e33 | [
"MIT"
] | 8 | 2016-01-26T14:12:50.000Z | 2021-02-20T14:24:09.000Z | 123.925532 | 18,884 | 0.858657 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e731877dd85f8d0ce03794b149545b5b4d305bb7 | 14,623 | ipynb | Jupyter Notebook | Hugging Face/Hugging_Face_Ask_boolean_question_to_T5.ipynb | Charles-de-Montigny/awesome-notebooks | 79485142ba557e9c20e6f6dca4fdc12a3443813e | [
"BSD-3-Clause"
] | 1 | 2022-01-20T22:04:48.000Z | 2022-01-20T22:04:48.000Z | Hugging Face/Hugging_Face_Ask_boolean_question_to_T5.ipynb | mmcfer/awesome-notebooks | 8d2892e40db480a323049e04decfefac45904af4 | [
"BSD-3-Clause"
] | 18 | 2021-10-02T02:49:32.000Z | 2021-12-27T21:39:14.000Z | Hugging Face/Hugging_Face_Ask_boolean_question_to_T5.ipynb | mmcfer/awesome-notebooks | 8d2892e40db480a323049e04decfefac45904af4 | [
"BSD-3-Clause"
] | null | null | null | 34.56974 | 1,019 | 0.587909 | [
[
[
"<img width=\"10%\" alt=\"Naas\" src=\"https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160\"/>",
"_____no_output_____"
],
[
"# Hugging Face - Ask boolean question to T5\n<a href=\"https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Hugging%20Face/Hugging_Face_Ask_boolean_question_to_T5.ipynb\" target=\"_parent\"><img src=\"https://img.shields.io/badge/-Open%20in%20Naas-success?labelColor=000000&logo=data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iMTAyNHB4IiBoZWlnaHQ9IjEwMjRweCIgdmlld0JveD0iMCAwIDEwMjQgMTAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgdmVyc2lvbj0iMS4xIj4KIDwhLS0gR2VuZXJhdGVkIGJ5IFBpeGVsbWF0b3IgUHJvIDIuMC41IC0tPgogPGRlZnM+CiAgPHRleHQgaWQ9InN0cmluZyIgdHJhbnNmb3JtPSJtYXRyaXgoMS4wIDAuMCAwLjAgMS4wIDIyOC4wIDU0LjUpIiBmb250LWZhbWlseT0iQ29tZm9ydGFhLVJlZ3VsYXIsIENvbWZvcnRhYSIgZm9udC1zaXplPSI4MDAiIHRleHQtZGVjb3JhdGlvbj0ibm9uZSIgZmlsbD0iI2ZmZmZmZiIgeD0iMS4xOTk5OTk5OTk5OTk5ODg2IiB5PSI3MDUuMCI+bjwvdGV4dD4KIDwvZGVmcz4KIDx1c2UgaWQ9Im4iIHhsaW5rOmhyZWY9IiNzdHJpbmciLz4KPC9zdmc+Cg==\"/></a>",
"_____no_output_____"
],
[
"**Tags:** #huggingface",
"_____no_output_____"
],
[
"## T5-base finetuned on BoolQ (superglue task)\nThis notebook is for demonstrating the training and use of the text-to-text-transfer-transformer (better known as T5) on boolean questions (BoolQ). The example use case is a validator indicating if an idea is environmentally friendly. Nearly any question can be passed into the `query` function (see below) as long as a context to a question is given.\n\nAuthor: Maximilian Frank ([script4all.com](//script4all.com)) - Copyleft license\n\nNotes:\n- The model from [huggingface.co/mrm8488/t5-base-finetuned-boolq](//huggingface.co/mrm8488/t5-base-finetuned-boolq) is used in this example as it is an already trained t5-base model on boolean questions (BoolQ task of superglue).\n- Documentation references on [huggingface.co/transformers/model_doc/t5.html#training](//huggingface.co/transformers/model_doc/t5.html#training), template script on [programming-review.com/machine-learning/t5](//programming-review.com/machine-learning/t5)\n- The greater the model, the higher the accuracy on BoolQ (see [arxiv.org/pdf/1910.10683.pdf](//arxiv.org/pdf/1910.10683.pdf)):\n t5-small|t5-base|t5-large|t5-3B|t5-11B\n -|-|-|-|-\n 76.4%|81.4%|85.4%|89.9%|91.2%",
"_____no_output_____"
],
[
"## Loading the model\nIf here comes an error, install the packages via `python3 -m pip install … --user`.\n\nYou can also load a T5 plain model (not finetuned). Just replace the following code\n```python\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\ntokenizer = AutoTokenizer.from_pretrained('mrm8488/t5-base-finetuned-boolq')\nmodel = AutoModelForSeq2SeqLM.from_pretrained('mrm8488/t5-base-finetuned-boolq')…\n```\nwith\n```python\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\ntokenizer = T5Tokenizer.from_pretrained('t5-small')\nmodel = T5ForConditionalGeneration.from_pretrained('t5-small')\n```\nwhere `t5-small` is one of the names in the table above.",
"_____no_output_____"
],
[
"**Tags:** #huggingface",
"_____no_output_____"
],
[
"## Input",
"_____no_output_____"
],
[
"### Install packages",
"_____no_output_____"
]
],
[
[
"!pip install transformers\n!pip install sentencepiece",
"_____no_output_____"
]
],
[
[
"### Import libraries",
"_____no_output_____"
]
],
[
[
"import json\nimport torch\nfrom operator import itemgetter\nfrom distutils.util import strtobool\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM",
"_____no_output_____"
]
],
[
[
"### Load model",
"_____no_output_____"
]
],
[
[
"tokenizer = AutoTokenizer.from_pretrained('mrm8488/t5-base-finetuned-boolq')\nmodel = AutoModelForSeq2SeqLM.from_pretrained('mrm8488/t5-base-finetuned-boolq').to(torch.device('cuda' if torch.cuda.is_available() else 'cpu'))\ntry:model.parallelize()\nexcept:pass",
"_____no_output_____"
]
],
[
[
"## Model",
"_____no_output_____"
],
[
"### Training\n> **Optional:** You can leave the following out, if you don't have custom datasets. By default the number of training epochs equals 0, so nothing is trained.\n\n> **Warning:** This option consumes a lot of runtime and thus *naas.ai* credits. Make sure to have enough credits on your account.\n\nFor each dataset a stream-opener has to be provided which is readable line by line (e.g. file, database). In the array with key `keys` are all dictionary keys which exist in the jsonl-line. So in this example the first training dataset has the keys `question` for the questions (string),`passage` for the contexts (string) and `answer` for the answers (boolean). Adjust these keys to your dataset.\n\nAt last you have to adjust the number of epochs to be trained (see comment `# epochs`).",
"_____no_output_____"
]
],
[
[
"srcs = [\n { 'stream': lambda:open('boolq/train.jsonl', 'r'),\n 'keys': ['question', 'passage', 'answer'] },\n { 'stream': lambda:open('boolq/dev.jsonl', 'r'),\n 'keys': ['question', 'passage', 'answer'] },\n { 'stream': lambda:open('boolq-nat-perturb/train.jsonl', 'r'),\n 'keys': ['question', 'passage', 'roberta_hard'] }\n]\nmodel.train()\nfor _ in range(0): # epochs\n for src in srcs:\n with src['stream']() as s:\n for d in s:\n q, p, a = itemgetter(src['keys'][0], src['keys'][1], src['keys'][2])(json.loads(d))\n tokens = tokenizer('question:'+q+'\\ncontext:'+p, return_tensors='pt')\n if len(tokens.input_ids[0]) > model.config.n_positions:\n continue\n model(input_ids=tokens.input_ids,\n labels=tokenizer(str(a), return_tensors='pt').input_ids,\n attention_mask=tokens.attention_mask,\n use_cache=True\n ).loss.backward()\nmodel.eval(); # ; suppresses long output on jupyter",
"_____no_output_____"
]
],
[
[
"### Define query function\nAs the model is ready, define the querying function.",
"_____no_output_____"
]
],
[
[
"def query(q='question', c='context'):\n return strtobool(\n tokenizer.decode(\n token_ids=model.generate(\n input_ids=tokenizer.encode('question:'+q+'\\ncontext:'+c, return_tensors='pt')\n )[0],\n skip_special_tokens=True,\n max_length=3)\n )",
"_____no_output_____"
]
],
[
[
"## Output",
"_____no_output_____"
],
[
"### Querying on the task\nNow the actual task begins: Query the model with your ideas (see list `ideas`).",
"_____no_output_____"
]
],
[
[
"if __name__ == '__main__':\n ideas = [ 'The idea is to pollute the air instead of riding the bike.', # should be false\n 'The idea is to go cycling instead of driving the car.', # should be true\n 'The idea is to put your trash everywhere.', # should be false\n 'The idea is to reduce transport distances.', # should be true\n 'The idea is to put plants on all the roofs.', # should be true\n 'The idea is to forbid opensource vaccines.', # should be true\n 'The idea is to go buy an Iphone every five years.', # should be false \n 'The idea is to walk once every week in the nature.', # should be true \n 'The idea is to go buy Green bonds.', # should be true \n 'The idea is to go buy fast fashion.', # should be false\n 'The idea is to buy single-use items.', # should be false\n 'The idea is to drink plastic bottled water.', # should be false\n 'The idea is to use import goods.', # should be false\n 'The idea is to use buy more food than you need.', # should be false\n 'The idea is to eat a lot of meat.', # should be false\n 'The idea is to eat less meat.', # should be false\n 'The idea is to always travel by plane.', # should be false\n 'The idea is to opensource vaccines.' # should be false\n \n ]\n for idea in ideas:\n print('🌏 Idea:', idea)\n print('\\t✅ Good idea' if query('Is the idea environmentally friendly?', idea) else '\\t❌ Bad idea' )",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
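A usage note for the row above: the notebook's `query` helper is not tied to the environmental-validator prompt; any yes/no question plus a context passage can be passed in. A short, hypothetical example (the product description below is invented for illustration):

```python
# Assumes the tokenizer, model, and query() helper defined in the notebook above.
question = "Is the packaging recyclable?"
context = ("The packaging is made entirely of cardboard and can be placed "
           "in the paper recycling bin after use.")

answer = query(question, context)  # strtobool turns the decoded 'True'/'False' into 1/0
print("Yes" if answer else "No")
```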
e7319cf9830a98b8e3843a986056853538ff0200 | 631,827 | ipynb | Jupyter Notebook | _development/tutorials/visualization/06-grid-plots.ipynb | dsfm-org/code-bank | 01a6a542c2801856b52f222a52b1d6e9215dc00d | [
"MIT"
] | 10 | 2020-07-03T06:21:39.000Z | 2021-12-12T11:33:13.000Z | _development/tutorials/visualization/06-grid-plots.ipynb | dsfm-org/code-bank | 01a6a542c2801856b52f222a52b1d6e9215dc00d | [
"MIT"
] | 5 | 2020-08-17T09:37:44.000Z | 2021-08-25T16:10:04.000Z | _development/tutorials/visualization/06-grid-plots.ipynb | dsfm-org/code-bank | 01a6a542c2801856b52f222a52b1d6e9215dc00d | [
"MIT"
] | 4 | 2020-08-06T10:30:13.000Z | 2021-09-10T13:35:00.000Z | 964.621374 | 211,636 | 0.954779 | [
[
[
"# Grid Plots",
"_____no_output_____"
],
[
"Source: [https://github.com/d-insight/code-bank.git](https://github.com/d-insight/code-bank.git) \nLicense: [MIT License](https://opensource.org/licenses/MIT). See open source [license](LICENSE) in the Code Bank repository. ",
"_____no_output_____"
],
[
"-------------",
"_____no_output_____"
],
[
"## Introduction\n\nGrids are general types of plots that allow you to map plot types to rows and columns of a grid, this helps you create similar plots separated by features.",
"_____no_output_____"
]
],
[
[
"import seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"iris = sns.load_dataset('iris')",
"_____no_output_____"
],
[
"iris.head() ",
"_____no_output_____"
],
[
"iris.species.unique()",
"_____no_output_____"
]
],
[
[
"A dataset of 150 different flowers, each from three different species of iris (*Iris setosa*, *Iris versicolor*, and *Iris virginica*).\n\n",
"_____no_output_____"
],
[
"## PairGrid\n\nPairgrid is a subplot grid for plotting pairwise relationships in a dataset. \n\nCustomize sns.pairplot()",
"_____no_output_____"
]
],
[
[
"# Just the Grid\nsns.PairGrid(iris) # Gives us empty an PairGrid (create subplots)",
"_____no_output_____"
],
[
"# Then you map to the grid\ng = sns.PairGrid(iris) # Better to assign it to a variable (g)\ng.map(plt.scatter)",
"_____no_output_____"
],
[
"# Map to upper,lower, and diagonal\ng = sns.PairGrid(iris)\ng.map_diag(plt.hist) # Can map on specif areas of the grid\ng.map_upper(plt.scatter)\ng.map_lower(sns.kdeplot)",
"_____no_output_____"
]
],
[
[
"## pairplot\n\npairplot is a simpler version of PairGrid (you'll use quite often). More standard.",
"_____no_output_____"
]
],
[
[
"sns.pairplot(iris)",
"_____no_output_____"
],
[
"sns.pairplot(iris,hue='species',palette='rainbow')",
"_____no_output_____"
]
],
[
[
"## Facet Grid\n\nFacetGrid is the general way to create grids of plots based off of a feature:",
"_____no_output_____"
]
],
[
[
"# We use the tips dataset\ntips = sns.load_dataset('tips')",
"_____no_output_____"
],
[
"tips.head()",
"_____no_output_____"
],
[
"# This is Just the Grid\ng = sns.FacetGrid(tips, col=\"time\", row=\"smoker\")",
"_____no_output_____"
],
[
"g = sns.FacetGrid(tips, col=\"time\", row=\"smoker\")\ng = g.map(plt.hist, \"total_bill\") # try with sns.distplot",
"_____no_output_____"
],
[
"g = sns.FacetGrid(tips, col=\"time\", row=\"smoker\",hue='sex')\n# Notice hwo the arguments come after plt.scatter call\ng = g.map(plt.scatter, \"total_bill\", \"tip\").add_legend() # plt.scatter needs two arguments",
"_____no_output_____"
]
],
[
[
"## JointGrid\n\nJointGrid is the general version for jointplot() type grids, for a quick example:",
"_____no_output_____"
]
],
[
[
"g = sns.JointGrid(x=\"total_bill\", y=\"tip\", data=tips)",
"_____no_output_____"
],
[
"g = sns.JointGrid(x=\"total_bill\", y=\"tip\", data=tips)\ng = g.plot(sns.regplot, sns.distplot)",
"_____no_output_____"
]
],
[
[
"To customize your visualizations even more!\nReference the documentation as necessary for grid types, but most of the time you'll just use the easier plots discussed earlier.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
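As a follow-on to the JointGrid example in the row above: `JointGrid` also exposes `plot_joint` and `plot_marginals` for finer control than a single `g.plot(...)` call. A brief sketch using the same tips dataset (with `histplot`, the newer replacement for the deprecated `distplot`):

```python
import matplotlib.pyplot as plt
import seaborn as sns

tips = sns.load_dataset("tips")

g = sns.JointGrid(x="total_bill", y="tip", data=tips)
g.plot_joint(sns.scatterplot, alpha=0.5)   # central panel
g.plot_marginals(sns.histplot, bins=20)    # top and right marginal axes
plt.show()
```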
e731a0cc8fcb6c811d1067acd6015b9fce1dcf94 | 9,091 | ipynb | Jupyter Notebook | demo/.ipynb_checkpoints/dataset-checkpoint.ipynb | SaadManzur/PoseUtils | 146861eedf6b704118fd38ee6ae996b8781a8741 | [
"MIT"
] | null | null | null | demo/.ipynb_checkpoints/dataset-checkpoint.ipynb | SaadManzur/PoseUtils | 146861eedf6b704118fd38ee6ae996b8781a8741 | [
"MIT"
] | null | null | null | demo/.ipynb_checkpoints/dataset-checkpoint.ipynb | SaadManzur/PoseUtils | 146861eedf6b704118fd38ee6ae996b8781a8741 | [
"MIT"
] | null | null | null | 94.697917 | 2,296 | 0.693433 | [
[
[
"from poseutils.datasets.processed import TransformedDataset\nfrom poseutils.datasets.unprocessed import GPADataset\nfrom poseutils.datasets.transformation import RootCenter\nfrom poseutils.datasets.transformation import CropAndScale\nfrom poseutils.datasets.transformation import CalculateMetrics\nfrom poseutils.datasets.transformation import Normalize",
"_____no_output_____"
],
[
"dataset = GPADataset(\"/home/smanzur/Personal/Research/Dataset/CrossDataset/gpa_xyz.npz\")\ntransformed = TransformedDataset(dataset)\n\ntransformations2d = [\n CropAndScale(),\n RootCenter(),\n CalculateMetrics(),\n Normalize()\n]\n\ntransformations3d = [\n RootCenter(),\n CalculateMetrics(),\n Normalize()\n]\n\ntransformed.apply2d(transformations)\ntransformed.apply3d(transformations)",
"[PoseUtils] Loaded raw data\n"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
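The row above chains poseutils transformation objects and applies them with `apply2d`/`apply3d`. The library's internals are not shown in this row, so the following is only a hedged, generic sketch of that pattern: toy classes with invented behavior, not the real poseutils API.

```python
# Generic "list of transformations applied in order" pattern (assumed, simplified).
import numpy as np

class RootCenter:
    """Subtract the root joint (index 0) from every joint of every pose."""
    def __call__(self, joints: np.ndarray) -> np.ndarray:
        return joints - joints[:, :1, :]

class Normalize:
    """Scale coordinates into roughly [-1, 1] by the largest absolute value."""
    def __call__(self, joints: np.ndarray) -> np.ndarray:
        return joints / (np.abs(joints).max() + 1e-8)

def apply_all(joints: np.ndarray, transformations) -> np.ndarray:
    for t in transformations:      # apply each transformation in sequence
        joints = t(joints)
    return joints

poses = np.random.rand(4, 16, 3)   # (n_samples, n_joints, xyz) toy poses
out = apply_all(poses, [RootCenter(), Normalize()])
print(out.shape, np.abs(out[:, 0]).max())  # root joint now sits at the origin
```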
e731aff49910d1a886def28cfe6e3334225fc0cf | 8,183 | ipynb | Jupyter Notebook | Day05.ipynb | oddrationale/AdventOfCode2015FSharp | 88fbe4edd47117f793e6d583d56c5255b7dfa988 | [
"MIT"
] | null | null | null | Day05.ipynb | oddrationale/AdventOfCode2015FSharp | 88fbe4edd47117f793e6d583d56c5255b7dfa988 | [
"MIT"
] | null | null | null | Day05.ipynb | oddrationale/AdventOfCode2015FSharp | 88fbe4edd47117f793e6d583d56c5255b7dfa988 | [
"MIT"
] | null | null | null | 28.611888 | 271 | 0.548943 | [
[
[
"<h2>--- Day 5: Doesn't He Have Intern-Elves For This? ---</h2>",
"_____no_output_____"
],
[
"[](https://mybinder.org/v2/gh/oddrationale/AdventOfCode2015FSharp/master?urlpath=lab%2Ftree%2FDay05.ipynb)",
"_____no_output_____"
],
[
"<p>Santa needs help figuring out which strings in his text file are naughty or nice.</p>\n<p>A <em>nice string</em> is one with all of the following properties:</p>\n<ul>\n<li>It contains at least three vowels (<code>aeiou</code> only), like <code>aei</code>, <code>xazegov</code>, or <code title=\"John Madden John Madden John Madden\">aeiouaeiouaeiou</code>.</li>\n<li>It contains at least one letter that appears twice in a row, like <code>xx</code>, <code>abcdde</code> (<code>dd</code>), or <code>aabbccdd</code> (<code>aa</code>, <code>bb</code>, <code>cc</code>, or <code>dd</code>).</li>\n<li>It does <em>not</em> contain the strings <code>ab</code>, <code>cd</code>, <code>pq</code>, or <code>xy</code>, even if they are part of one of the other requirements.</li>\n</ul>\n<p>For example:</p>\n<ul>\n<li><code>ugknbfddgicrmopn</code> is nice because it has at least three vowels (<code>u...i...o...</code>), a double letter (<code>...dd...</code>), and none of the disallowed substrings.</li>\n<li><code>aaa</code> is nice because it has at least three vowels and a double letter, even though the letters used by different rules overlap.</li>\n<li><code>jchzalrnumimnmhp</code> is naughty because it has no double letter.</li>\n<li><code>haegwjzuvuyypxyu</code> is naughty because it contains the string <code>xy</code>.</li>\n<li><code>dvszwmarrgswjxmb</code> is naughty because it contains only one vowel.</li>\n</ul>\n<p>How many strings are nice?</p>",
"_____no_output_____"
]
],
[
[
"let input = File.ReadAllLines @\"input/05.txt\"",
"_____no_output_____"
],
[
"let containsAtLeastThreeVowels (s: string) = \n let vowels = [| 'a'; 'e'; 'i'; 'o'; 'u' |]\n s\n |> Seq.filter (fun c -> Array.contains c vowels)\n |> Seq.length >= 3",
"_____no_output_____"
],
[
"let containsDoubleLetter (s: string) = \n s\n |> Seq.windowed 2\n |> Seq.exists (fun arr -> arr.[0] = arr.[1])",
"_____no_output_____"
],
[
"let doesNotContain (arr: string[]) (s: string) =\n arr\n |> Seq.forall (fun item -> item |> s.Contains |> not)",
"_____no_output_____"
],
[
"let isNice (s: string) = \n containsAtLeastThreeVowels s\n && containsDoubleLetter s\n && doesNotContain [| \"ab\"; \"cd\"; \"pq\"; \"xy\" |] s",
"_____no_output_____"
],
[
"#!time\ninput\n|> Seq.filter isNice\n|> Seq.length",
"_____no_output_____"
]
],
[
[
"<h2 id=\"part2\">--- Part Two ---</h2>",
"_____no_output_____"
],
[
"<p>Realizing the error of his ways, Santa has switched to a better model of determining whether a string is naughty or nice. None of the old rules apply, as they are all clearly ridiculous.</p>\n<p>Now, a nice string is one with all of the following properties:</p>\n<ul>\n<li>It contains a pair of any two letters that appears at least twice in the string without overlapping, like <code>xyxy</code> (<code>xy</code>) or <code>aabcdefgaa</code> (<code>aa</code>), but not like <code>aaa</code> (<code>aa</code>, but it overlaps).</li>\n<li>It contains at least one letter which repeats with exactly one letter between them, like <code>xyx</code>, <code>abcdefeghi</code> (<code>efe</code>), or even <code>aaa</code>.</li>\n</ul>\n<p>For example:</p>\n<ul>\n<li><code>qjhvhtzxzqqjkmpb</code> is nice because is has a pair that appears twice (<code>qj</code>) and a letter that repeats with exactly one letter between them (<code>zxz</code>).</li>\n<li><code>xxyxx</code> is nice because it has a pair that appears twice and a letter that repeats with one between, even though the letters used by each rule overlap.</li>\n<li><code>uurcxstgmygtbstg</code> is naughty because it has a pair (<code>tg</code>) but no repeat with a single letter between them.</li>\n<li><code>ieodomkazucvgmuy</code> is naughty because it has a repeating letter with one between (<code>odo</code>), but no pair that appears twice.</li>\n</ul>\n<p>How many strings are nice under these new rules?</p>",
"_____no_output_____"
]
],
[
[
"let containsAtLeastTwoPairs (s: string) = \n s\n |> Seq.windowed 2\n |> Seq.map (fun arr -> $\"{arr.[0]}{arr.[1]}\")\n |> Seq.exists (fun pair -> pair |> s.Split |> Seq.length >= 3)",
"_____no_output_____"
],
[
"let containsRepeatWithOneBetween (s: string) = \n s\n |> Seq.windowed 3\n |> Seq.exists (fun arr -> arr.[0] = arr.[2])",
"_____no_output_____"
],
[
"let isReallyNice (s: string) = \n containsAtLeastTwoPairs s\n && containsRepeatWithOneBetween s",
"_____no_output_____"
],
[
"#!time\ninput\n|> Seq.filter isReallyNice\n|> Seq.length",
"_____no_output_____"
]
],
[
[
"[Prev](Day04.ipynb) | [Next](Day06.ipynb)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
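For the puzzle in the row above, the part-one rules (at least three vowels, a doubled letter, none of the forbidden substrings) are compact enough to cross-check outside F#. A quick sketch in Python, using the examples from the puzzle statement:

```python
# Cross-check sketch of the part-one rules in Python (the notebook itself uses F#).
def is_nice(s: str) -> bool:
    three_vowels = sum(c in "aeiou" for c in s) >= 3
    double_letter = any(a == b for a, b in zip(s, s[1:]))
    no_forbidden = not any(bad in s for bad in ("ab", "cd", "pq", "xy"))
    return three_vowels and double_letter and no_forbidden

# Examples from the puzzle statement
assert is_nice("ugknbfddgicrmopn")
assert is_nice("aaa")
assert not is_nice("jchzalrnumimnmhp")   # no double letter
assert not is_nice("haegwjzuvuyypxyu")   # contains "xy"
assert not is_nice("dvszwmarrgswjxmb")   # only one vowel
```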
e731bb5c7fd251e3fb00b5a7b999a93eb668b6f7 | 18,869 | ipynb | Jupyter Notebook | PJC_aula2.ipynb | angelosqr/Aprendizado | b314cb92860482149a1f4822f80e87705c8dee37 | [
"MIT"
] | null | null | null | PJC_aula2.ipynb | angelosqr/Aprendizado | b314cb92860482149a1f4822f80e87705c8dee37 | [
"MIT"
] | 4 | 2020-08-19T14:29:59.000Z | 2020-08-19T15:05:11.000Z | PJC_aula2.ipynb | angelosqr/Aprendizado | b314cb92860482149a1f4822f80e87705c8dee37 | [
"MIT"
] | null | null | null | 27.14964 | 912 | 0.432985 | [
[
[
"<a href=\"https://colab.research.google.com/github/angelosqr/Aprendizado/blob/master/PJC_aula2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# 8. Repetições (Iterações) Condicionais",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"# Enquanto for verdade, repete\nnumero = 7\nwhile numero > 0:\n print(numero)\n numero = numero - 1\nprint(\"Acabou de repetir\")",
"7\n6\n5\n4\n3\n2\n1\nAcabou de repetir\n"
],
[
"# Podemos aplicar ao nosso programa\nrepetir = True \nwhile repetir:\n idade = input('Digite sua idade: ')\n idade = int(idade)\n\n if idade >= 18 and idade < 70:\n print(f'Com {idade} anos, seu voto é obrigatório!')\n elif idade >= 16:\n print(f'Com {idade} anos, seu voto é facultativo!')\n else:\n print(f'Com {idade} anos, você NÃO pode votar!')\n\n resposta = input('Deseja repetir? (s/n)')\n if resposta == 'n' or resposta == 'N':\n repetir = False",
"Digite sua idade: 234567\nCom 234567 anos, seu voto é facultativo!\nDeseja repetir? (s/n)N\n"
]
],
[
[
"## 9 Listas de Dados",
"_____no_output_____"
]
],
[
[
"# lista com vários números (idades)\nidades = [12, 47, 78, 8, 18]\n# os valores não precisam ser do mesmo tipo \nmiscelanea = [15, 'biscoito', True, \"teste\", 67.9, '78']\nprint(idades[3])\nprint(miscelanea[1])\n\n# pode modificar um valor na lista\nidades[3] = 25\nprint(idades[3])\n# adicionar elemento ao final lista\nidades.append(77)\nprint(idades)",
"8\nbiscoito\n25\n[12, 47, 78, 25, 18, 77]\n"
],
[
"# pode fazer lista de listas (matrizes)\nlista_de_lista = [[7, 9, 5]]\nprint(lista_de_lista[0][1])",
"9\n"
],
[
"# concatenando listas\nprint(idades + [88, 12])\nprint(idades)\n\n# replicando valores na lista\nlista12 = [2] * 12\nprint(lista12)\nprint(lista12[4])",
"[12, 47, 78, 25, 18, 77, 88, 12]\n[12, 47, 78, 25, 18, 77]\n[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n2\n"
],
[
"numero = 0\n# lista vazia\nnumeros = []\n# preenchendo a lista\nwhile numero < 7:\n numeros.append(numero)\n numero = numero + 2\nprint(numeros)",
"[0, 2, 4, 6]\n"
]
],
[
[
"# 10. Iterações Definidas (for)",
"_____no_output_____"
]
],
[
[
"idades = [12, 47, 78, 8, 18]\n# variável idade assume valores na lista idades\nfor idade in idades:\n print(idade + 2)",
"14\n49\n80\n10\n20\n"
],
[
"for idade in idades:\n if idade >= 18 and idade < 70:\n print(f'Com {idade} anos, seu voto é obrigatório!')\n elif idade >= 16:\n print(f'Com {idade} anos, seu voto é facultativo!')\n else:\n print(f'Com {idade} anos, você NÃO pode votar!')",
"Com 12 anos, você NÃO pode votar!\nCom 47 anos, seu voto é obrigatório!\nCom 78 anos, seu voto é facultativo!\nCom 8 anos, você NÃO pode votar!\nCom 18 anos, seu voto é obrigatório!\n"
],
[
"idades = [12, 47, 78, 8, 18,]\nnomes = ['João', 'Maria', 'Severina', 'Pedro', 'Rebeca']\n\nfor i in range(len(idades)):\n print(f\"{nomes[i]} tem {idades[i]} anos\")",
"João tem 12 anos\nMaria tem 47 anos\nSeverina tem 78 anos\nPedro tem 8 anos\nRebeca tem 18 anos\nJonas tem 43 anos\nRafaela tem 3 anos\n"
],
[
"#length comprimento\nprint(len(nomes)) ",
"6\n"
]
],
[
[
"### Pedindo uma Lista",
"_____no_output_____"
]
],
[
[
"# pedindo uma lista\nn = int(input('Quantos elementos? '))\nlista = []\nfor i in range(n):\n elemento = float(input(f'Digite o {i+1}º elemento: '))\n lista.append(elemento)\n\nprint(lista)",
"_____no_output_____"
]
],
[
[
"## Experimente\n* Tente pedir uma lista com *while* ao invés de *for*",
"_____no_output_____"
],
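[
"# Esboço ilustrativo (não faz parte do notebook original): a mesma leitura de lista\n# usando while em vez de for, como sugerido acima.\nn = int(input('Quantos elementos? '))\nlista = []\ni = 0\nwhile i < n:\n    elemento = float(input(f'Digite o {i+1}º elemento: '))\n    lista.append(elemento)\n    i = i + 1\n\nprint(lista)",
"_____no_output_____"
],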
[
"# 11. Algumas operações com Strings",
"_____no_output_____"
]
],
[
[
"frase = 'Em noite de lua cheia...'\n# que nem as listas!\nprint(frase[0])\nprint(frase[9])\n# índices negativos são válidos em Python (nas listas também)\n# acessam de trás pra frente\nprint(frase[-4])",
"E\nd\na\n"
],
[
"lista = [12, 'China', '8']\npalavra = 'ninja'\n# listas são mutáveis\nlista[1] = 5\nprint(lista)\n# strings são imutáveis\npalavra[1] = 'a'\nprint(palavra)",
"[12, 5, '8']\n"
],
[
"# strings são iteráveis\nfor c in frase:\n print(c)",
"E\nm\n \nn\no\ni\nt\ne\n \nd\ne\n \nl\nu\na\n \nc\nh\ne\ni\na\n.\n.\n.\n"
]
],
[
[
"## Experimente\n\n* Tente usar a função *len* em uma string \n\n",
"_____no_output_____"
],
[
"### Filtrando listas e string",
"_____no_output_____"
]
],
[
[
"# filtrando os números pares da lista\nnumeros = [12, 56, 87, 23, 98, 54, 11, 7]\npares = []\nfor num in numeros:\n if (num % 2) == 0:\n pares.append(num)\nprint(pares)",
"[12, 56, 98, 54]\n"
],
[
"# in pode ser usado para avaliar se algo pertence à lista/string\nprint(13 in pares)\nprint('d' in 'cabra')",
"_____no_output_____"
],
[
"# nova string sem caracteres que são vogais\nfrase = \"o rato roeu a roupa do rei\"\nsemvogais = \"\"\nfor c in frase:\n if not c in 'aeiou':\n semvogais = semvogais + c\n\nprint(semvogais)",
"_____no_output_____"
]
],
[
[
"## Experimente\n\n* Tente modificar o exemplo para funcionar com maiúsculas e minúsculas\n\n",
"_____no_output_____"
],
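[
"# Esboço ilustrativo (não faz parte do notebook original): uma forma possível de\n# tratar maiúsculas e minúsculas no filtro de vogais, como sugerido acima.\nfrase = \"O Rato Roeu A Roupa Do Rei\"\nsemvogais = \"\"\nfor c in frase:\n    if c.lower() not in 'aeiou':\n        semvogais = semvogais + c\n\nprint(semvogais)",
"_____no_output_____"
],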
[
"# Exercícios\n\n1. Faça um programa que recebe uma lista de números reais e exibe o seu maior elemento.\n\n2. Faça um programa que recebe uma lista de números reais e exibe sua média.\n\n\n3. Dada uma lista de números inteiros, faça um programa que responda a soma de todos os números pares na lista e o produto de todos os números ímpares.\n\n4. Faça um programa que receba um número e exiba o [fatorial](https://pt.wikipedia.org/wiki/Fatorial) desse número.\n\n5. Dada uma lista de números inteiros e um número inteiro desejado, responder qual o índice deste número na lista.\n\n6. Faça um programa que verifique se uma string possui caracteres duplicados.\n\n7. Faça um programa que receba uma string que seja uma combinação dos seguintes caracteres: '-', 'a', 't', 'c', 'g'. Eles podem aparecer em qualquer ordem e múltiplas vezes, por exemplo:\n'---agcatg-c-c-a-ttt--'\nA saída do programa deve ser:\n\n 7a) A string de entrada sem os '-' do início. No caso do exemplo: 'agcatg-c-c-a-ttt--'\n\n 7b) A string de entrada sem os '-' do final. No caso do exemplo: '---agcatg-c-c-a-ttt'\n\n 7c) A string de entrada sem os '-' início e do final. No caso do exemplo: 'agcatg-c-c-a-ttt'\n\n## Mini Projetos\n\n#### P3 Implemente um jogo da forca.\n\n#### P4 Implemente um jogo da velha.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e731c5ab46e4163f3c9b14327e053cb91bf94df7 | 265,916 | ipynb | Jupyter Notebook | Project_3/.ipynb_checkpoints/Project 3-checkpoint.ipynb | mriosrivas/DSP_Student_2021 | 7d978d5a538e2eb198dfbe073b4d8dcbf1aa756f | [
"MIT"
] | 2 | 2022-01-25T04:58:58.000Z | 2022-03-24T23:00:13.000Z | Project_3/.ipynb_checkpoints/Project 3-checkpoint.ipynb | mriosrivas/DSP_Student_2021 | 7d978d5a538e2eb198dfbe073b4d8dcbf1aa756f | [
"MIT"
] | 1 | 2021-11-25T00:39:40.000Z | 2021-11-25T00:39:40.000Z | Project_3/.ipynb_checkpoints/Project 3-checkpoint.ipynb | mriosrivas/DSP_Student_2021 | 7d978d5a538e2eb198dfbe073b4d8dcbf1aa756f | [
"MIT"
] | null | null | null | 527.611111 | 71,572 | 0.944941 | [
[
[
"# FIR Filter Class Implementation",
"_____no_output_____"
],
[
"This is the final exercise for the course and will test all the knowledge that you have gathered during this semester. The final exercise consist of developing a series of functions/methods to implement a `FIR` class in Python. For this, three files were provided and are stored in the `Common` directory. A brief description of which function or method needs to be modified in each file is presented here:\n\n```python\nfir.py\n\"Main file where filter implementations are developed, consist of the `FIR` class that you will develop. For this file you need to work on all methods.\"\n```\n\n```python\nfft.py\n\"Auxiliary file which consist of the `FFT` class that will help you to implement the `FIR` class from fir.py and zero_pad_fourier function from auxiliary.py. For this file you need to work on all functions.\"\n```\n\n```python\nauxiliary.py\n\"Auxiliary file, which was created in a previous exercise, and you will be adding two new functions: 1)zero_pad_fourier and 2)zero_pad\"\n```\n\n```python\ncommon_plots.py\n\"Plotting auxiliary function where you will add a method called plot_frequency_response to easily view your filter results\"\n```\n\nIn order to succeed I recommend you to start with the `common_plots.py`, then work with `fft.py` and `auxiliary.py` files and finish with the `fir.py` file.",
"_____no_output_____"
]
],
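[
[
"# Hypothetical sketch (not the graded auxiliary.py code): one way the zero_pad and\n# zero_pad_fourier helpers could behave, using numpy's FFT directly; the real methods\n# also take the kernel length M and a `method` argument selecting the custom DFT/FFT classes.\nimport numpy as np\n\ndef zero_pad_sketch(h, L):\n    h = np.asarray(h).reshape(-1)\n    return np.append(h, np.zeros(L - h.shape[0]))\n\ndef zero_pad_fourier_sketch(h, L=1024):\n    H = np.fft.fft(zero_pad_sketch(h, L))[0:L // 2].reshape(-1, 1)\n    f = np.linspace(0, 0.5, L // 2)\n    return H, f",
"_____no_output_____"
]
],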
[
[
"import sys\nsys.path.insert(0, '../')\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom Common import fir\nfrom Common import fft\nfrom Common import auxiliary\nfrom Common import common_plots",
"_____no_output_____"
],
[
"cplots = common_plots.Plot()\nflt = fir.FIR()\nfast_ft = fft.FFT()",
"_____no_output_____"
]
],
[
[
"## Part 1: Test your `common_plots.py` file\nFor the first part you will test your method `plot_frequency_response` from the class `Plot` with a $sinc$ signal. The following code provides you with a test code using the numpy function `sinc`, be aware that when you implement your `shifted_sinc` method you can't use the numpy implementation of the `sinc` function.",
"_____no_output_____"
]
],
[
[
"fc = 0.20\nBW = 0.04\nM = int(4/BW)\ni = np.arange(0,M,1)\n\nh = np.sinc(2*i*fc)\n\nH = np.fft.fft(h)\nf = np.linspace(0,1,h.shape[0])\n\ncplots.plot_frequency_response(np.absolute(H).reshape(-1,1), f) #Note that this is a two sided spectrum",
"_____no_output_____"
]
],
[
[
"## Part 2: Test your `fft.py` file\nNow we will compare your `FFT` class against numpy, and see if both one sided and two sided results match. First run this test to see if your `fft` function works correctly:",
"_____no_output_____"
]
],
[
[
"N = h.shape[0]\nL = 1024\n\nfc = 0.20\nBW = 0.04\nM = int(4/BW)\ni = np.arange(0,M,1)\nh = np.sinc(2*i*fc)\n\nh_zero_pad = np.append(h,np.zeros(L-N))\n\nH_own_one_side = fast_ft.fft(h_zero_pad, one_sided=True)\nH_own_two_side = fast_ft.fft(h_zero_pad, one_sided=False)\nH_numpy = np.fft.fft(h_zero_pad)\n\nassert np.allclose(H_own_one_side, H_numpy[0:L//2]), \"Your implementation missmatches numpy's, check your code!\"\nprint(\"Great work! One sided implementation rocks!\")\n\nassert np.allclose(H_own_two_side, H_numpy), \"Your implementation missmatches numpy's, check your code!\"\nprint(\"But is better to have a two sided implementation! Excellent work!\")",
"Great work! One sided implementation rocks!\nBut is better to have a two sided implementation! Excellent work!\n"
]
],
[
[
"Now let's run this code and see if the function `ifft` also works well:",
"_____no_output_____"
]
],
[
[
"h_own_two_side = fast_ft.ifft(H_own_two_side).astype('complex')\nh_own_two_side[0] = h_own_two_side[0]/2\n\nplt.plot(np.real(h_own_two_side[0:N]))\nplt.plot(h)\n\nassert np.allclose(h_own_two_side[0:M], h), \"It seems there is an error, check your ifft function\"\nprint(\"You have been doing a great work! Your IFFT is doing an amazing job!\")",
"You have been doing a great work! Your IFFT is doing an amazing job!\n"
]
],
[
[
"## Part 3: Test your `auxiliary.py` file\nAt this moment you can test your `zero_pad_fourier` method and check if both methods `DFT` and `FFT` works. The provided code test against numpy's implementation of the Fast Fourier Transform:",
"_____no_output_____"
]
],
[
[
"fc = 0.20\nBW = 0.04\nM = int(4/BW)\ni = np.arange(0,M,1)\nL = 1024\n\nh = np.sinc(2*i*fc)\nh_zero_pad = np.append(h,np.zeros(L-N))\nH_numpy = np.fft.fft(h_zero_pad).reshape(-1,1)\nf = np.linspace(0,0.5,L//2)\n\nH_dft, f_dft = auxiliary.zero_pad_fourier(h.reshape(-1,1), M, method='DFT')\nH_fft, f_fft = auxiliary.zero_pad_fourier(h.reshape(-1,1), M, method='FFT')\nprint(H_dft.shape)\n\ncplots.plot_frequency_response(np.absolute(H_numpy[0:L//2]), f, label='Numpy implemetation')\ncplots.plot_frequency_response(np.absolute(H_dft), f_dft, label='DFT implemetation')\ncplots.plot_frequency_response(np.absolute(H_fft), f_fft, label='FFT implemetation')\n\nassert np.allclose(np.absolute(H_numpy[0:L//2]), np.absolute(H_fft)), \"Your implementation missmatches numpy's, check your fft implementation.\"\nassert np.allclose(np.absolute(H_numpy[0:L//2]), np.absolute(H_dft[0:-1])), \"Your implementation missmatches numpy's, check your dft implementation\"\nprint(\"Perfect! Both filters are doing an amazing job!\")",
"(513, 1)\nPerfect! Both filters are doing an amazing job!\n"
]
],
[
[
"## Part 4: Test your `fir.py` file\nThis is the last part of the Jupyter Notebook. In here you will test your window filter, then your low, high, pass band, and reject band filters.\n\n### 4.1 Test your window filter\nThe first test we will perform is going to be on our window filters. We will check both `blackman` and `hamming` types and compare it's results for the spectral inversion and spectral reversal calculations.",
"_____no_output_____"
]
],
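[
[
"# Hypothetical reference (an assumption about the expected behaviour, not the graded\n# fir.py code): spectral inversion negates the kernel and adds one at its centre sample;\n# spectral reversal flips the sign of every other sample.\nimport numpy as np\n\ndef spectral_inversion_sketch(h):\n    h = np.asarray(h, dtype=float).reshape(-1)\n    h_inv = -h\n    h_inv[h.shape[0] // 2] += 1.0\n    return h_inv\n\ndef spectral_reversal_sketch(h):\n    h = np.asarray(h, dtype=float).reshape(-1)\n    return h * (-1.0) ** np.arange(h.shape[0])",
"_____no_output_____"
]
],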
[
[
"fc = 0.20\nM = 103 #Now we use an odd number\nL = 1024\n\nhamming = flt.window_filter(fc, M, window_type='hamming', normalized=True)\nblackman = flt.window_filter(fc, M, window_type='blackman', normalized=True)\n\n# Spectral inversion \nhamming_inverted = flt.spectral_inversion(hamming)\nblackman_inverted = flt.spectral_inversion(blackman)\n\n# Spectral reversal\nhamming_reversal = flt.spectral_reversal(hamming)\nblackman_reversal = flt.spectral_reversal(blackman)",
"_____no_output_____"
],
[
"# Fourier transformations for Hamming Filters\nH, f_h = auxiliary.zero_pad_fourier(hamming, M)\nH_inv, f_h_inv = auxiliary.zero_pad_fourier(hamming_inverted, M, method='FFT', L=L)\nH_rev, f_h_rev = auxiliary.zero_pad_fourier(hamming_reversal, M, method='FFT', L=L)\n\n# Fourier transformations for Blackman Filters\nB, f_b = auxiliary.zero_pad_fourier(blackman, M)\nB_inv, f_b_inv = auxiliary.zero_pad_fourier(blackman_inverted, M, method='FFT', L=L)\nB_rev, f_b_rev = auxiliary.zero_pad_fourier(blackman_reversal, M, method='FFT', L=L)",
"_____no_output_____"
],
[
"plt.rcParams[\"figure.figsize\"] = (15, 5)\n\nplt.subplot(1,3,1)\ncplots.plot_frequency_response(np.absolute(H), f_h, label=\"Hamming\")\ncplots.plot_frequency_response(np.absolute(B), f_b, label=\"Blackman\")\nplt.legend()\nplt.subplot(1,3,2)\ncplots.plot_frequency_response(np.absolute(H_inv), f_h_inv, label=\"Inverted Hamming\")\ncplots.plot_frequency_response(np.absolute(B_inv), f_b_inv, label=\"Inverted Blackman\")\nplt.legend()\nplt.subplot(1,3,3)\ncplots.plot_frequency_response(np.absolute(H_rev), f_h_rev, label=\"Reversal Hamming\")\ncplots.plot_frequency_response(np.absolute(B_rev), f_b_rev, label=\"Reversal Blackman\")\nplt.legend()\nplt.subplots_adjust(hspace=0.3)",
"_____no_output_____"
]
],
[
[
"### 4.2 Test your low pass filter\nNow test your low pass filters. For this case we will use both `hamming` and `blackman` windows.",
"_____no_output_____"
]
],
[
[
"fc = 0.2\nM = 101\nh_lp = flt.low_pass_filter(fc, M, window_type='hamming')\nH_lp, fh_lp = auxiliary.zero_pad_fourier(h_lp, M)\n\nb_lp = flt.low_pass_filter(fc, M, window_type='blackman')\nB_lp, fb_lp = auxiliary.zero_pad_fourier(b_lp, M)\n\nplt.rcParams[\"figure.figsize\"] = (7, 5)\ncplots.plot_frequency_response(np.absolute(H_lp), fh_lp, label='Hamming')\ncplots.plot_frequency_response(np.absolute(B_lp), fb_lp, label='Blackman')\nplt.legend()\nplt.grid(\"on\")",
"_____no_output_____"
]
],
[
[
"### 4.3 Test your high pass filter\nIt is time to test your high pass filter, besides testing again our `hamming` and `blackman` windows, we are also going to test the `spectral_inversion` and `spectral_reversal` methods.",
"_____no_output_____"
]
],
[
[
"fc = 0.17\nM = 200\n\nh_hp = flt.high_pass_filter(fc, M, method='spectral_inversion', window_type='hamming')\nH_hp, fh_hp = auxiliary.zero_pad_fourier(h_hp, M)\n\nb_hp = flt.high_pass_filter(fc, M, method='spectral_reversal', window_type='blackman')\nB_hp, fb_hp = auxiliary.zero_pad_fourier(b_hp, M)\n\ncplots.plot_frequency_response(np.absolute(H_hp), fh_hp, label='Spectral Inversion-Hamming')\ncplots.plot_frequency_response(np.absolute(B_hp), fb_hp, label='Spectral Reversal-Blackman')\nplt.legend()\nplt.grid(\"on\")",
"_____no_output_____"
]
],
[
[
"### 4.2 Test your band/reject band filter\nFinally we will create a band pass/reject filter and test our `band_filter` function like this:",
"_____no_output_____"
]
],
[
[
"fc1 = 0.17\nfc2 = 0.33\n\nh = flt.band_filter(fc1, fc2, M, band_type='pass')\nH, f = auxiliary.zero_pad_fourier(h, M)\n\ng = flt.band_filter(fc1, fc2, M, band_type='reject')\nG, f = auxiliary.zero_pad_fourier(g, M)\n\ncplots.plot_frequency_response(np.absolute(H), f, label='Band Pass Filter')\ncplots.plot_frequency_response(np.absolute(G), f, label='Band Reject Filter')\nplt.legend()\nplt.grid(\"on\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e731c9cdf99da0e68f0450b6832a20eef8231542 | 147,465 | ipynb | Jupyter Notebook | files/weather_predictions.ipynb | escience-academy/2021-07-05-intro-deep-learning | c490129104aa7b1093a7d061dfa6509e819e6b7e | [
"CC-BY-4.0"
] | null | null | null | files/weather_predictions.ipynb | escience-academy/2021-07-05-intro-deep-learning | c490129104aa7b1093a7d061dfa6509e819e6b7e | [
"CC-BY-4.0"
] | 3 | 2021-06-17T15:42:57.000Z | 2021-07-02T11:25:46.000Z | files/weather_predictions.ipynb | escience-academy/2021-07-05-intro-deep-learning | c490129104aa7b1093a7d061dfa6509e819e6b7e | [
"CC-BY-4.0"
] | 1 | 2021-07-06T07:36:01.000Z | 2021-07-06T07:36:01.000Z | 73.292744 | 26,560 | 0.642552 | [
[
[
"import pandas as pd\nfrom tensorflow import keras",
"_____no_output_____"
],
[
"data = pd.read_csv(\"weather_prediction_dataset_light.csv\")\ndata.head()",
"_____no_output_____"
],
[
"# when data is in different path\n#import os\n\n#filename = os.path.join(\"path....\", \"weather_prediction_dataset_light.csv\")\n#data = pd.read_csv(filename)",
"_____no_output_____"
],
[
"data.shape",
"_____no_output_____"
],
[
"data.columns",
"_____no_output_____"
],
[
"import re\n\npattern = r'[A-Z_]'\nfeature_types = {re.sub(pattern, '', col) for col in data.columns}\nfeature_types",
"_____no_output_____"
],
[
"data.describe()",
"_____no_output_____"
],
[
"# define the data\nX_data = data.loc[:365*3].drop(columns=['DATE', 'MONTH'])\n\n# define labels (sunshine hours for the next day)\ny_data = data.loc[1:(365*3 + 1)][\"BASEL_sunshine\"]\n\nX_data.shape, y_data.shape",
"_____no_output_____"
],
[
"X_data.head()",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\n\n# 70% data --> training set\n# 15% --> validation set\n# 15% --> test set\nX_train, X_not_train, y_train, y_not_train = train_test_split(X_data, y_data,\n test_size=0.3,\n random_state=0)\nX_val, X_test, y_val, y_test = train_test_split = train_test_split(X_not_train,\n y_not_train,\n test_size=0.5,\n random_state=0)\n",
"_____no_output_____"
],
[
"X_train.shape, X_val.shape, X_test.shape",
"_____no_output_____"
],
[
"X_train.head()",
"_____no_output_____"
],
[
"y_train.head()",
"_____no_output_____"
]
],
[
[
"## Build a neural network",
"_____no_output_____"
]
],
[
[
"# input layer: keras.layers.Input(shape(,))\n\n# dense layer: keras.layers.Dense(#of_nodes, activation=\"relu\")(layer_before)\n\n# output layer ?",
"_____no_output_____"
],
[
"def create_nn(nodes1, nodes2):\n inputs = keras.layers.Input(shape=(X_data.shape[1], ))\n\n layers_dense = keras.layers.Dense(nodes1, activation='relu')(inputs)\n layers_dense = keras.layers.Dense(nodes2, activation='relu')(layers_dense)\n\n outputs = keras.layers.Dense(1)(layers_dense)\n\n return keras.models.Model(inputs=inputs, outputs=outputs,\n name='sunshine_preditor')\n\nmodel = create_nn(100, 50)\nmodel.summary()",
"Model: \"sunshine_preditor\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_3 (InputLayer) [(None, 89)] 0 \n_________________________________________________________________\ndense_5 (Dense) (None, 100) 9000 \n_________________________________________________________________\ndense_6 (Dense) (None, 50) 5050 \n_________________________________________________________________\ndense_7 (Dense) (None, 1) 51 \n=================================================================\nTotal params: 14,101\nTrainable params: 14,101\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"model.compile(loss=keras.losses.MeanSquaredError(),\n optimizer=keras.optimizers.Adam(),\n metrics=[keras.metrics.RootMeanSquaredError()])",
"_____no_output_____"
],
[
"history = model.fit(X_train, y_train,\n epochs=200)",
"Epoch 1/200\n24/24 [==============================] - 3s 12ms/step - loss: 27.2230 - root_mean_squared_error: 5.1595\nEpoch 2/200\n24/24 [==============================] - 0s 10ms/step - loss: 14.7104 - root_mean_squared_error: 3.8332\nEpoch 3/200\n24/24 [==============================] - 0s 9ms/step - loss: 13.2647 - root_mean_squared_error: 3.6413\nEpoch 4/200\n24/24 [==============================] - 0s 7ms/step - loss: 12.2870 - root_mean_squared_error: 3.5046\nEpoch 5/200\n24/24 [==============================] - 0s 11ms/step - loss: 11.7683 - root_mean_squared_error: 3.4294\nEpoch 6/200\n24/24 [==============================] - 0s 14ms/step - loss: 11.3879 - root_mean_squared_error: 3.3733 0s - loss: 11.4138 - root_mean_squared_error: \nEpoch 7/200\n24/24 [==============================] - 0s 16ms/step - loss: 11.4527 - root_mean_squared_error: 3.3829 0s - loss: 12.4300 - root_mean_squared_e\nEpoch 8/200\n24/24 [==============================] - 0s 9ms/step - loss: 10.8690 - root_mean_squared_error: 3.2966\nEpoch 9/200\n24/24 [==============================] - 0s 14ms/step - loss: 11.0180 - root_mean_squared_error: 3.3184\nEpoch 10/200\n24/24 [==============================] - 0s 11ms/step - loss: 10.4049 - root_mean_squared_error: 3.2246\nEpoch 11/200\n24/24 [==============================] - 0s 11ms/step - loss: 12.2580 - root_mean_squared_error: 3.4991\nEpoch 12/200\n24/24 [==============================] - 0s 10ms/step - loss: 10.4669 - root_mean_squared_error: 3.2347\nEpoch 13/200\n24/24 [==============================] - 0s 7ms/step - loss: 11.8834 - root_mean_squared_error: 3.4468\nEpoch 14/200\n24/24 [==============================] - 0s 8ms/step - loss: 10.3388 - root_mean_squared_error: 3.2140\nEpoch 15/200\n24/24 [==============================] - 0s 6ms/step - loss: 10.2743 - root_mean_squared_error: 3.2041\nEpoch 16/200\n24/24 [==============================] - 0s 8ms/step - loss: 9.5548 - root_mean_squared_error: 3.0906A: 0s - loss: 9.5702 - root_mean_squared_error: 3.093\nEpoch 17/200\n24/24 [==============================] - 0s 6ms/step - loss: 8.8725 - root_mean_squared_error: 2.9775\nEpoch 18/200\n24/24 [==============================] - 0s 8ms/step - loss: 8.5386 - root_mean_squared_error: 2.9155\nEpoch 19/200\n24/24 [==============================] - 0s 5ms/step - loss: 10.1637 - root_mean_squared_error: 3.1871: 0s - loss: 10.1831 - root_mean_squared_error: 3.19\nEpoch 20/200\n24/24 [==============================] - 0s 6ms/step - loss: 9.6788 - root_mean_squared_error: 3.1104\nEpoch 21/200\n24/24 [==============================] - 0s 12ms/step - loss: 8.0356 - root_mean_squared_error: 2.8277: 0s - loss: 6.5965 - root_mean_squared_error: 2\nEpoch 22/200\n24/24 [==============================] - 0s 5ms/step - loss: 9.6362 - root_mean_squared_error: 3.1035\nEpoch 23/200\n24/24 [==============================] - 0s 11ms/step - loss: 8.5264 - root_mean_squared_error: 2.9181: 0s - loss: 8.4690 - root_mean_squared_error: 2.90\nEpoch 24/200\n24/24 [==============================] - 0s 8ms/step - loss: 8.6168 - root_mean_squared_error: 2.9345\nEpoch 25/200\n24/24 [==============================] - 0s 13ms/step - loss: 8.8380 - root_mean_squared_error: 2.9693\nEpoch 26/200\n24/24 [==============================] - 0s 6ms/step - loss: 8.2664 - root_mean_squared_error: 2.8742\nEpoch 27/200\n24/24 [==============================] - 0s 8ms/step - loss: 8.6525 - root_mean_squared_error: 2.9402\nEpoch 28/200\n24/24 [==============================] - 0s 8ms/step - loss: 7.9456 - 
root_mean_squared_error: 2.8182\nEpoch 29/200\n24/24 [==============================] - 0s 7ms/step - loss: 8.4265 - root_mean_squared_error: 2.9025\nEpoch 30/200\n24/24 [==============================] - 0s 6ms/step - loss: 7.9240 - root_mean_squared_error: 2.8126\nEpoch 31/200\n24/24 [==============================] - 0s 18ms/step - loss: 7.2435 - root_mean_squared_error: 2.6882\nEpoch 32/200\n24/24 [==============================] - 0s 9ms/step - loss: 6.8450 - root_mean_squared_error: 2.6097\nEpoch 33/200\n24/24 [==============================] - 0s 6ms/step - loss: 6.7289 - root_mean_squared_error: 2.5906\nEpoch 34/200\n24/24 [==============================] - 0s 6ms/step - loss: 7.5339 - root_mean_squared_error: 2.7433\nEpoch 35/200\n24/24 [==============================] - 0s 9ms/step - loss: 7.3928 - root_mean_squared_error: 2.7186\nEpoch 36/200\n24/24 [==============================] - 0s 8ms/step - loss: 8.0406 - root_mean_squared_error: 2.8293\nEpoch 37/200\n24/24 [==============================] - 0s 7ms/step - loss: 7.8706 - root_mean_squared_error: 2.8021\nEpoch 38/200\n24/24 [==============================] - 0s 5ms/step - loss: 8.3537 - root_mean_squared_error: 2.8848\nEpoch 39/200\n24/24 [==============================] - 0s 5ms/step - loss: 8.0032 - root_mean_squared_error: 2.8270\nEpoch 40/200\n24/24 [==============================] - 0s 5ms/step - loss: 8.5721 - root_mean_squared_error: 2.9262\nEpoch 41/200\n24/24 [==============================] - 0s 5ms/step - loss: 7.1429 - root_mean_squared_error: 2.6707\nEpoch 42/200\n24/24 [==============================] - 0s 6ms/step - loss: 7.5474 - root_mean_squared_error: 2.7430\nEpoch 43/200\n24/24 [==============================] - 0s 7ms/step - loss: 6.8967 - root_mean_squared_error: 2.6256\nEpoch 44/200\n24/24 [==============================] - 0s 8ms/step - loss: 6.1872 - root_mean_squared_error: 2.4850\nEpoch 45/200\n24/24 [==============================] - ETA: 0s - loss: 6.1685 - root_mean_squared_error: 2.481 - 0s 11ms/step - loss: 6.2218 - root_mean_squared_error: 2.4923\nEpoch 46/200\n24/24 [==============================] - 0s 10ms/step - loss: 6.3732 - root_mean_squared_error: 2.5238: 0s - loss: 6.4089 - root_mean_squared_error: 2.53\nEpoch 47/200\n24/24 [==============================] - 0s 9ms/step - loss: 6.3112 - root_mean_squared_error: 2.5114\nEpoch 48/200\n24/24 [==============================] - 0s 7ms/step - loss: 6.0971 - root_mean_squared_error: 2.4685\nEpoch 49/200\n24/24 [==============================] - 0s 8ms/step - loss: 6.3753 - root_mean_squared_error: 2.5234\nEpoch 50/200\n24/24 [==============================] - 0s 7ms/step - loss: 6.1743 - root_mean_squared_error: 2.4838\nEpoch 51/200\n24/24 [==============================] - 0s 7ms/step - loss: 6.5827 - root_mean_squared_error: 2.5636\nEpoch 52/200\n24/24 [==============================] - 0s 9ms/step - loss: 6.0049 - root_mean_squared_error: 2.4494\nEpoch 53/200\n24/24 [==============================] - 0s 6ms/step - loss: 5.8056 - root_mean_squared_error: 2.4088\nEpoch 54/200\n24/24 [==============================] - 0s 6ms/step - loss: 5.9763 - root_mean_squared_error: 2.4441\nEpoch 55/200\n24/24 [==============================] - 0s 6ms/step - loss: 5.8334 - root_mean_squared_error: 2.4149\nEpoch 56/200\n24/24 [==============================] - 0s 6ms/step - loss: 5.7714 - root_mean_squared_error: 2.4008\nEpoch 57/200\n24/24 [==============================] - 0s 5ms/step - loss: 5.4908 - root_mean_squared_error: 2.3418\nEpoch 
58/200\n24/24 [==============================] - 0s 6ms/step - loss: 5.5620 - root_mean_squared_error: 2.3554\nEpoch 59/200\n24/24 [==============================] - 0s 6ms/step - loss: 6.0284 - root_mean_squared_error: 2.4531\nEpoch 60/200\n24/24 [==============================] - 0s 6ms/step - loss: 6.2407 - root_mean_squared_error: 2.4941\nEpoch 61/200\n24/24 [==============================] - ETA: 0s - loss: 5.4121 - root_mean_squared_error: 2.325 - 0s 5ms/step - loss: 5.4170 - root_mean_squared_error: 2.3263\nEpoch 62/200\n24/24 [==============================] - 0s 4ms/step - loss: 5.9763 - root_mean_squared_error: 2.4434\nEpoch 63/200\n24/24 [==============================] - 0s 8ms/step - loss: 5.3807 - root_mean_squared_error: 2.3129\nEpoch 64/200\n24/24 [==============================] - 0s 8ms/step - loss: 6.3204 - root_mean_squared_error: 2.5100\nEpoch 65/200\n24/24 [==============================] - 0s 7ms/step - loss: 5.2620 - root_mean_squared_error: 2.2914\nEpoch 66/200\n24/24 [==============================] - 0s 9ms/step - loss: 4.9157 - root_mean_squared_error: 2.2130\nEpoch 67/200\n24/24 [==============================] - 0s 7ms/step - loss: 5.2682 - root_mean_squared_error: 2.2949A: 0s - loss: 5.1984 - root_mean_squared_error: 2.27\nEpoch 68/200\n24/24 [==============================] - 0s 7ms/step - loss: 4.5765 - root_mean_squared_error: 2.1382\nEpoch 69/200\n24/24 [==============================] - 0s 7ms/step - loss: 4.5182 - root_mean_squared_error: 2.1246\nEpoch 70/200\n24/24 [==============================] - 0s 6ms/step - loss: 4.4604 - root_mean_squared_error: 2.1109\nEpoch 71/200\n24/24 [==============================] - 0s 9ms/step - loss: 4.5824 - root_mean_squared_error: 2.1403\nEpoch 72/200\n24/24 [==============================] - 0s 5ms/step - loss: 4.4079 - root_mean_squared_error: 2.0988\nEpoch 73/200\n24/24 [==============================] - 0s 5ms/step - loss: 4.4728 - root_mean_squared_error: 2.1132\nEpoch 74/200\n24/24 [==============================] - 0s 7ms/step - loss: 4.4231 - root_mean_squared_error: 2.1005\nEpoch 75/200\n24/24 [==============================] - 0s 6ms/step - loss: 3.7631 - root_mean_squared_error: 1.9328\nEpoch 76/200\n24/24 [==============================] - 0s 6ms/step - loss: 4.4130 - root_mean_squared_error: 2.0975\nEpoch 77/200\n24/24 [==============================] - 0s 5ms/step - loss: 4.2097 - root_mean_squared_error: 2.0481\nEpoch 78/200\n24/24 [==============================] - 0s 6ms/step - loss: 4.0496 - root_mean_squared_error: 2.0119\nEpoch 79/200\n24/24 [==============================] - 0s 6ms/step - loss: 4.3688 - root_mean_squared_error: 2.0895\nEpoch 80/200\n24/24 [==============================] - 0s 6ms/step - loss: 3.5824 - root_mean_squared_error: 1.8920\nEpoch 81/200\n24/24 [==============================] - 0s 6ms/step - loss: 4.3807 - root_mean_squared_error: 2.0902\nEpoch 82/200\n24/24 [==============================] - 0s 5ms/step - loss: 3.9846 - root_mean_squared_error: 1.9940\nEpoch 83/200\n24/24 [==============================] - 0s 5ms/step - loss: 3.7896 - root_mean_squared_error: 1.9462\nEpoch 84/200\n24/24 [==============================] - 0s 6ms/step - loss: 3.2072 - root_mean_squared_error: 1.7904\nEpoch 85/200\n24/24 [==============================] - 0s 6ms/step - loss: 3.3473 - root_mean_squared_error: 1.8236\nEpoch 86/200\n24/24 [==============================] - 0s 6ms/step - loss: 3.7313 - root_mean_squared_error: 1.9310\nEpoch 87/200\n24/24 [==============================] - 0s 
5ms/step - loss: 3.9635 - root_mean_squared_error: 1.9906\nEpoch 88/200\n24/24 [==============================] - 0s 6ms/step - loss: 3.7019 - root_mean_squared_error: 1.9213\nEpoch 89/200\n24/24 [==============================] - 0s 5ms/step - loss: 3.6529 - root_mean_squared_error: 1.9103\nEpoch 90/200\n24/24 [==============================] - 0s 9ms/step - loss: 3.4758 - root_mean_squared_error: 1.8642\nEpoch 91/200\n24/24 [==============================] - 0s 8ms/step - loss: 3.3686 - root_mean_squared_error: 1.8342\nEpoch 92/200\n24/24 [==============================] - 0s 9ms/step - loss: 3.5707 - root_mean_squared_error: 1.8884\nEpoch 93/200\n24/24 [==============================] - 0s 9ms/step - loss: 3.4410 - root_mean_squared_error: 1.8538\nEpoch 94/200\n24/24 [==============================] - 0s 8ms/step - loss: 3.4921 - root_mean_squared_error: 1.8678\nEpoch 95/200\n24/24 [==============================] - 0s 8ms/step - loss: 3.5253 - root_mean_squared_error: 1.8762\nEpoch 96/200\n24/24 [==============================] - 0s 9ms/step - loss: 2.9702 - root_mean_squared_error: 1.7217\nEpoch 97/200\n24/24 [==============================] - 0s 8ms/step - loss: 2.8037 - root_mean_squared_error: 1.6718\nEpoch 98/200\n24/24 [==============================] - 0s 8ms/step - loss: 3.6633 - root_mean_squared_error: 1.9128\nEpoch 99/200\n24/24 [==============================] - 0s 8ms/step - loss: 3.0015 - root_mean_squared_error: 1.7314\nEpoch 100/200\n24/24 [==============================] - 0s 12ms/step - loss: 2.8856 - root_mean_squared_error: 1.6971\nEpoch 101/200\n24/24 [==============================] - 0s 9ms/step - loss: 2.6448 - root_mean_squared_error: 1.6238\nEpoch 102/200\n24/24 [==============================] - 0s 13ms/step - loss: 2.8012 - root_mean_squared_error: 1.6722\nEpoch 103/200\n24/24 [==============================] - 0s 9ms/step - loss: 3.1454 - root_mean_squared_error: 1.7718A: 0s - loss: 2.9118 - root_mean_squared_error: 1.70\nEpoch 104/200\n24/24 [==============================] - 0s 8ms/step - loss: 4.0928 - root_mean_squared_error: 2.0190\nEpoch 105/200\n24/24 [==============================] - 0s 6ms/step - loss: 3.4361 - root_mean_squared_error: 1.8480\nEpoch 106/200\n24/24 [==============================] - 0s 5ms/step - loss: 4.0018 - root_mean_squared_error: 1.9987\nEpoch 107/200\n24/24 [==============================] - 0s 6ms/step - loss: 2.7391 - root_mean_squared_error: 1.6537\nEpoch 108/200\n24/24 [==============================] - 0s 9ms/step - loss: 2.5697 - root_mean_squared_error: 1.5991\nEpoch 109/200\n24/24 [==============================] - 0s 8ms/step - loss: 3.0087 - root_mean_squared_error: 1.7338\nEpoch 110/200\n24/24 [==============================] - 0s 9ms/step - loss: 2.6525 - root_mean_squared_error: 1.6256\nEpoch 111/200\n24/24 [==============================] - 0s 6ms/step - loss: 2.4548 - root_mean_squared_error: 1.5660\nEpoch 112/200\n24/24 [==============================] - 0s 6ms/step - loss: 2.7169 - root_mean_squared_error: 1.6472\nEpoch 113/200\n24/24 [==============================] - 0s 5ms/step - loss: 2.4101 - root_mean_squared_error: 1.5480\nEpoch 114/200\n24/24 [==============================] - 0s 8ms/step - loss: 2.2374 - root_mean_squared_error: 1.4905\nEpoch 115/200\n24/24 [==============================] - 0s 6ms/step - loss: 2.8826 - root_mean_squared_error: 1.6960\nEpoch 116/200\n24/24 [==============================] - 0s 6ms/step - loss: 2.5127 - root_mean_squared_error: 1.5840\nEpoch 117/200\n24/24 
[==============================] - 0s 6ms/step - loss: 2.5928 - root_mean_squared_error: 1.6097\nEpoch 118/200\n24/24 [==============================] - 0s 5ms/step - loss: 2.2815 - root_mean_squared_error: 1.5088\nEpoch 119/200\n24/24 [==============================] - 0s 5ms/step - loss: 3.2107 - root_mean_squared_error: 1.7902\nEpoch 120/200\n24/24 [==============================] - 0s 6ms/step - loss: 3.1599 - root_mean_squared_error: 1.7771\nEpoch 121/200\n24/24 [==============================] - 0s 5ms/step - loss: 2.6729 - root_mean_squared_error: 1.6341\nEpoch 122/200\n24/24 [==============================] - 0s 5ms/step - loss: 2.2869 - root_mean_squared_error: 1.5108\nEpoch 123/200\n24/24 [==============================] - 0s 6ms/step - loss: 2.3879 - root_mean_squared_error: 1.5444\nEpoch 124/200\n24/24 [==============================] - 0s 8ms/step - loss: 2.2245 - root_mean_squared_error: 1.4890A: 0s - loss: 1.9280 - root_mean_squared_error: 1.3\nEpoch 125/200\n24/24 [==============================] - 0s 5ms/step - loss: 2.3323 - root_mean_squared_error: 1.5259\nEpoch 126/200\n24/24 [==============================] - 0s 7ms/step - loss: 2.6624 - root_mean_squared_error: 1.6294\nEpoch 127/200\n24/24 [==============================] - 0s 5ms/step - loss: 2.6285 - root_mean_squared_error: 1.6196\nEpoch 128/200\n24/24 [==============================] - 0s 5ms/step - loss: 2.6207 - root_mean_squared_error: 1.6179\nEpoch 129/200\n24/24 [==============================] - 0s 6ms/step - loss: 2.3150 - root_mean_squared_error: 1.5209\nEpoch 130/200\n24/24 [==============================] - 0s 7ms/step - loss: 2.1035 - root_mean_squared_error: 1.4477\nEpoch 131/200\n24/24 [==============================] - 0s 6ms/step - loss: 2.3137 - root_mean_squared_error: 1.5188\nEpoch 132/200\n24/24 [==============================] - 0s 6ms/step - loss: 2.1150 - root_mean_squared_error: 1.4527\nEpoch 133/200\n24/24 [==============================] - 0s 6ms/step - loss: 1.9898 - root_mean_squared_error: 1.4073\nEpoch 134/200\n24/24 [==============================] - 0s 6ms/step - loss: 1.9032 - root_mean_squared_error: 1.3791\nEpoch 135/200\n24/24 [==============================] - 0s 4ms/step - loss: 1.6425 - root_mean_squared_error: 1.2800\nEpoch 136/200\n24/24 [==============================] - 0s 4ms/step - loss: 1.9231 - root_mean_squared_error: 1.3832\nEpoch 137/200\n24/24 [==============================] - 0s 4ms/step - loss: 2.3540 - root_mean_squared_error: 1.5339\nEpoch 138/200\n24/24 [==============================] - 0s 5ms/step - loss: 1.8941 - root_mean_squared_error: 1.3753\nEpoch 139/200\n24/24 [==============================] - 0s 6ms/step - loss: 1.5289 - root_mean_squared_error: 1.2348\nEpoch 140/200\n24/24 [==============================] - 0s 5ms/step - loss: 1.5046 - root_mean_squared_error: 1.2250\nEpoch 141/200\n24/24 [==============================] - 0s 6ms/step - loss: 1.6569 - root_mean_squared_error: 1.2862\nEpoch 142/200\n24/24 [==============================] - 0s 7ms/step - loss: 1.4315 - root_mean_squared_error: 1.1944\nEpoch 143/200\n24/24 [==============================] - 0s 6ms/step - loss: 1.3159 - root_mean_squared_error: 1.1442\nEpoch 144/200\n24/24 [==============================] - 0s 5ms/step - loss: 1.6762 - root_mean_squared_error: 1.2933\nEpoch 145/200\n24/24 [==============================] - 0s 5ms/step - loss: 2.0566 - root_mean_squared_error: 1.4207\nEpoch 146/200\n24/24 [==============================] - 0s 7ms/step - loss: 1.5630 - 
root_mean_squared_error: 1.2484\nEpoch 147/200\n24/24 [==============================] - 0s 6ms/step - loss: 1.5365 - root_mean_squared_error: 1.2362\nEpoch 148/200\n24/24 [==============================] - 0s 4ms/step - loss: 1.5571 - root_mean_squared_error: 1.2447\nEpoch 149/200\n24/24 [==============================] - 0s 6ms/step - loss: 1.3509 - root_mean_squared_error: 1.1593A: 0s - loss: 1.2413 - root_mean_squared_error: 1.11\nEpoch 150/200\n24/24 [==============================] - 0s 5ms/step - loss: 1.5716 - root_mean_squared_error: 1.2526\nEpoch 151/200\n24/24 [==============================] - 0s 6ms/step - loss: 1.7749 - root_mean_squared_error: 1.3302\nEpoch 152/200\n24/24 [==============================] - 0s 5ms/step - loss: 1.8114 - root_mean_squared_error: 1.3451\nEpoch 153/200\n24/24 [==============================] - 0s 6ms/step - loss: 1.5748 - root_mean_squared_error: 1.2520\nEpoch 154/200\n24/24 [==============================] - 0s 5ms/step - loss: 1.2796 - root_mean_squared_error: 1.1305\nEpoch 155/200\n24/24 [==============================] - 0s 6ms/step - loss: 1.1507 - root_mean_squared_error: 1.0706\nEpoch 156/200\n24/24 [==============================] - 0s 4ms/step - loss: 1.3523 - root_mean_squared_error: 1.1623\nEpoch 157/200\n24/24 [==============================] - 0s 3ms/step - loss: 1.2095 - root_mean_squared_error: 1.0971\nEpoch 158/200\n24/24 [==============================] - 0s 4ms/step - loss: 1.3649 - root_mean_squared_error: 1.1679\nEpoch 159/200\n24/24 [==============================] - 0s 4ms/step - loss: 1.2580 - root_mean_squared_error: 1.1211\nEpoch 160/200\n24/24 [==============================] - 0s 4ms/step - loss: 1.6157 - root_mean_squared_error: 1.2703\nEpoch 161/200\n24/24 [==============================] - 0s 3ms/step - loss: 1.2608 - root_mean_squared_error: 1.1200\nEpoch 162/200\n24/24 [==============================] - 0s 5ms/step - loss: 1.3232 - root_mean_squared_error: 1.1491\nEpoch 163/200\n24/24 [==============================] - 0s 4ms/step - loss: 1.4587 - root_mean_squared_error: 1.2067\nEpoch 164/200\n24/24 [==============================] - 0s 4ms/step - loss: 1.4918 - root_mean_squared_error: 1.2196\nEpoch 165/200\n24/24 [==============================] - 0s 4ms/step - loss: 1.9832 - root_mean_squared_error: 1.4061\nEpoch 166/200\n24/24 [==============================] - 0s 5ms/step - loss: 1.9307 - root_mean_squared_error: 1.3883\nEpoch 167/200\n24/24 [==============================] - 0s 4ms/step - loss: 1.8537 - root_mean_squared_error: 1.3581\nEpoch 168/200\n24/24 [==============================] - 0s 5ms/step - loss: 1.8293 - root_mean_squared_error: 1.3520\nEpoch 169/200\n24/24 [==============================] - 0s 7ms/step - loss: 1.4127 - root_mean_squared_error: 1.1877\nEpoch 170/200\n24/24 [==============================] - 0s 10ms/step - loss: 1.2065 - root_mean_squared_error: 1.0975: 0s - loss: 1.1753 - root_mean_squared_error: 1.\nEpoch 171/200\n24/24 [==============================] - 0s 10ms/step - loss: 0.9025 - root_mean_squared_error: 0.9474: 0s - loss: 0.8183 - root_mean_squared_error: 0.901 - ETA: 0s - loss: 0.8383 - root_mean_squared_error: 0.91\nEpoch 172/200\n24/24 [==============================] - 0s 10ms/step - loss: 1.2125 - root_mean_squared_error: 1.0992\nEpoch 173/200\n24/24 [==============================] - 0s 5ms/step - loss: 1.2229 - root_mean_squared_error: 1.1013\nEpoch 174/200\n24/24 [==============================] - 0s 6ms/step - loss: 1.1859 - root_mean_squared_error: 
1.0885\nEpoch 175/200\n24/24 [==============================] - 0s 3ms/step - loss: 1.2073 - root_mean_squared_error: 1.0983\nEpoch 176/200\n24/24 [==============================] - 0s 4ms/step - loss: 1.3609 - root_mean_squared_error: 1.1620\nEpoch 177/200\n24/24 [==============================] - 0s 4ms/step - loss: 0.9139 - root_mean_squared_error: 0.9549\nEpoch 178/200\n24/24 [==============================] - 0s 3ms/step - loss: 1.3442 - root_mean_squared_error: 1.1570\nEpoch 179/200\n24/24 [==============================] - 0s 4ms/step - loss: 0.9914 - root_mean_squared_error: 0.9940\nEpoch 180/200\n24/24 [==============================] - 0s 5ms/step - loss: 1.2153 - root_mean_squared_error: 1.1006\nEpoch 181/200\n24/24 [==============================] - 0s 5ms/step - loss: 0.7565 - root_mean_squared_error: 0.8692\nEpoch 182/200\n24/24 [==============================] - 0s 5ms/step - loss: 0.8556 - root_mean_squared_error: 0.9246\nEpoch 183/200\n24/24 [==============================] - 0s 4ms/step - loss: 0.7708 - root_mean_squared_error: 0.8767\nEpoch 184/200\n24/24 [==============================] - 0s 6ms/step - loss: 0.8714 - root_mean_squared_error: 0.9330\nEpoch 185/200\n24/24 [==============================] - 0s 5ms/step - loss: 1.0666 - root_mean_squared_error: 1.0314\nEpoch 186/200\n24/24 [==============================] - 0s 4ms/step - loss: 0.8501 - root_mean_squared_error: 0.9208\nEpoch 187/200\n24/24 [==============================] - 0s 4ms/step - loss: 0.9813 - root_mean_squared_error: 0.9896\nEpoch 188/200\n24/24 [==============================] - 0s 4ms/step - loss: 0.8291 - root_mean_squared_error: 0.9097\nEpoch 189/200\n24/24 [==============================] - 0s 4ms/step - loss: 1.0918 - root_mean_squared_error: 1.0403\nEpoch 190/200\n24/24 [==============================] - 0s 4ms/step - loss: 0.7473 - root_mean_squared_error: 0.8628\nEpoch 191/200\n24/24 [==============================] - 0s 2ms/step - loss: 0.8349 - root_mean_squared_error: 0.9129\nEpoch 192/200\n24/24 [==============================] - 0s 3ms/step - loss: 0.7383 - root_mean_squared_error: 0.8584\nEpoch 193/200\n24/24 [==============================] - 0s 3ms/step - loss: 0.8493 - root_mean_squared_error: 0.9182\nEpoch 194/200\n24/24 [==============================] - 0s 3ms/step - loss: 0.9978 - root_mean_squared_error: 0.9983\nEpoch 195/200\n24/24 [==============================] - 0s 4ms/step - loss: 0.8803 - root_mean_squared_error: 0.9371\nEpoch 196/200\n24/24 [==============================] - 0s 3ms/step - loss: 0.8118 - root_mean_squared_error: 0.9002A: 0s - loss: 0.8110 - root_mean_squared_error: 0.899\nEpoch 197/200\n24/24 [==============================] - 0s 3ms/step - loss: 0.7620 - root_mean_squared_error: 0.8695\nEpoch 198/200\n24/24 [==============================] - 0s 4ms/step - loss: 0.8945 - root_mean_squared_error: 0.9449\nEpoch 199/200\n24/24 [==============================] - 0s 4ms/step - loss: 0.6630 - root_mean_squared_error: 0.8127\nEpoch 200/200\n24/24 [==============================] - ETA: 0s - loss: 0.5383 - root_mean_squared_error: 0.733 - 0s 3ms/step - loss: 0.5435 - root_mean_squared_error: 0.7367\n"
],
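[
"# Illustrative variant, not executed in the original notebook: the validation split\n# created earlier is never passed to fit() above; passing it lets Keras report\n# validation error per epoch (epochs=10 here is an arbitrary choice).\nhistory_val = model.fit(X_train, y_train,\n                        validation_data=(X_val, y_val),\n                        epochs=10)",
"_____no_output_____"
],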
[
"import seaborn as sns\n\nhistory_df = pd.DataFrame.from_dict(history.history)\nsns.lineplot(data=history_df['root_mean_squared_error'])",
"_____no_output_____"
],
[
"y_predicted = model.predict(X_train)",
"_____no_output_____"
],
[
"y_predicted.shape",
"_____no_output_____"
],
[
"y_predicted[:10]",
"_____no_output_____"
],
[
"from matplotlib import pyplot as plt\n\nplt.scatter(y_predicted, y_train, alpha=0.5)\nplt.axis(\"equal\")\nplt.xlabel(\"predicted sunshine\")\nplt.ylabel(\"true sunshine\")",
"_____no_output_____"
],
[
"y_test_predicted = model.predict(X_test)\n\nplt.scatter(y_test_predicted, y_test, alpha=0.5)\nplt.axis(\"equal\")\nplt.xlabel(\"predicted sunshine\")\nplt.ylabel(\"true sunshine\")",
"_____no_output_____"
],
[
"model.layers",
"_____no_output_____"
],
[
"model.summary()",
"Model: \"sunshine_preditor\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_3 (InputLayer) [(None, 89)] 0 \n_________________________________________________________________\ndense_5 (Dense) (None, 100) 9000 \n_________________________________________________________________\ndense_6 (Dense) (None, 50) 5050 \n_________________________________________________________________\ndense_7 (Dense) (None, 1) 51 \n=================================================================\nTotal params: 14,101\nTrainable params: 14,101\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"#model.weights",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e731ccc0f55ce4bcac84d2080a9f8d9b5667a027 | 291,133 | ipynb | Jupyter Notebook | notebooks/PyTorch.ipynb | Fabien-DS/DSA_Sentiment | 27c68909d87aea9ec034792a376f4cd9be10feff | [
"RSA-MD"
] | 1 | 2021-05-08T16:32:01.000Z | 2021-05-08T16:32:01.000Z | notebooks/PyTorch.ipynb | Fabien-DS/DSA_Sentiment | 27c68909d87aea9ec034792a376f4cd9be10feff | [
"RSA-MD"
] | null | null | null | notebooks/PyTorch.ipynb | Fabien-DS/DSA_Sentiment | 27c68909d87aea9ec034792a376f4cd9be10feff | [
"RSA-MD"
] | null | null | null | 59.366436 | 41,336 | 0.671727 | [
[
[
"# test pyTorch",
"_____no_output_____"
],
[
"source : **Twitter-roBERTa-base for Sentiment Analysis**\n\nsur [Hugging Face ](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment)",
"_____no_output_____"
]
],
[
[
"!pwd",
"/mnt/notebooks\n"
],
[
"import torch\ntorch.cuda.is_available()",
"_____no_output_____"
],
[
"!pip install transformers",
"Requirement already satisfied: transformers in /usr/local/lib/python3.8/dist-packages (4.5.1)\nRequirement already satisfied: filelock in /usr/local/lib/python3.8/dist-packages (from transformers) (3.0.12)\nRequirement already satisfied: tokenizers<0.11,>=0.10.1 in /usr/local/lib/python3.8/dist-packages (from transformers) (0.10.2)\nRequirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from transformers) (20.9)\nRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.8/dist-packages (from transformers) (4.56.2)\nRequirement already satisfied: requests in /usr/local/lib/python3.8/dist-packages (from transformers) (2.25.1)\nRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.8/dist-packages (from transformers) (1.20.1)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.8/dist-packages (from transformers) (2020.11.13)\nRequirement already satisfied: sacremoses in /usr/local/lib/python3.8/dist-packages (from transformers) (0.0.45)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->transformers) (2.4.7)\nRequirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.8/dist-packages (from requests->transformers) (4.0.0)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.8/dist-packages (from requests->transformers) (2020.12.5)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.8/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.8/dist-packages (from requests->transformers) (1.26.3)\nRequirement already satisfied: joblib in /usr/local/lib/python3.8/dist-packages (from sacremoses->transformers) (1.0.1)\nRequirement already satisfied: six in /usr/local/lib/python3.8/dist-packages (from sacremoses->transformers) (1.15.0)\nRequirement already satisfied: click in /usr/local/lib/python3.8/dist-packages (from sacremoses->transformers) (7.1.2)\n\u001b[33mWARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv\u001b[0m\n"
],
[
"from transformers import AutoModelForSequenceClassification\nfrom transformers import TFAutoModelForSequenceClassification\nfrom transformers import AutoTokenizer\nimport numpy as np\nfrom scipy.special import softmax\nimport csv\nimport urllib.request\n",
"_____no_output_____"
],
[
"\n# Preprocess text (username and link placeholders)\ndef preprocess(text):\n new_text = []\n\n\n for t in text.split(\" \"):\n t = '@user' if t.startswith('@') and len(t) > 1 else t\n t = 'http' if t.startswith('http') else t\n new_text.append(t)\n return \" \".join(new_text)\n",
"_____no_output_____"
],
[
"# Tasks:\n# emoji, emotion, hate, irony, offensive, sentiment\n# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary\n\n#task='sentiment'\n#MODEL = f\"cardiffnlp/twitter-roberta-base-{task}\"",
"_____no_output_____"
],
[
"from transformers import AutoModelForSequenceClassification\nfrom transformers import AutoTokenizer\nfrom transformers import AutoTokenizer, AutoConfig\nfrom transformers import pipeline",
"_____no_output_____"
]
],
[
[
"**ATTENTION** A ne lancer qu'une fois au premier téléchargement du modèle :",
"_____no_output_____"
]
],
[
[
"task='sentiment'\nMODEL = f\"cardiffnlp/twitter-roberta-base-{task}\"\nprint(MODEL)\ntokenizer = AutoTokenizer.from_pretrained(MODEL)\nconfig = AutoConfig.from_pretrained(MODEL)\nmodel = AutoModelForSequenceClassification.from_pretrained(MODEL)\n\n\nmodel.save_pretrained('../pretrained_models/'+MODEL)\ntokenizer.save_pretrained('../pretrained_models/'+MODEL)\nconfig.save_pretrained('../pretrained_models/'+MODEL)",
"cardiffnlp/twitter-roberta-base-sentiment\n"
],
[
"task='sentiment'\nMODEL = f\"cardiffnlp/twitter-roberta-base-{task}\"\n\nmodel = AutoModelForSequenceClassification.from_pretrained('../pretrained_models/'+MODEL)\ntokenizer = AutoTokenizer.from_pretrained('../pretrained_models/'+MODEL)\nconfig = AutoConfig.from_pretrained('../pretrained_models/'+MODEL)",
"_____no_output_____"
],
[
"# download label mapping\nlabels=[]\nmapping_link = f\"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt\"\nwith urllib.request.urlopen(mapping_link) as f:\n html = f.read().decode('utf-8').split(\"\\n\")\n csvreader = csv.reader(html, delimiter='\\t')\nlabels = [row[1] for row in csvreader if len(row) > 1]",
"_____no_output_____"
],
[
"nlp=pipeline(\"sentiment-analysis\", model=model, tokenizer=tokenizer, device=0, return_all_scores=True)",
"_____no_output_____"
],
[
"text = \"Good night 😊\"\ntext = preprocess(text)\n\nnlp(text, return_all_scores=True)",
"_____no_output_____"
]
],
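[
[
"# Illustrative sketch (an assumption mirroring the Hugging Face model card rather than\n# this notebook): the tokenizer, model, softmax and label list loaded above can also\n# score a tweet manually, without the pipeline helper.\nencoded_input = tokenizer(preprocess(\"Good night 😊\"), return_tensors='pt')\noutput = model(**encoded_input)\nscores = softmax(output[0][0].detach().numpy())\nfor idx in np.argsort(scores)[::-1]:\n    print(f\"{labels[idx]}: {scores[idx]:.4f}\")",
"_____no_output_____"
]
],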
[
[
"# ==== RECUP ML ====",
"_____no_output_____"
]
],
[
[
"!pip install nltk\n",
"Requirement already satisfied: nltk in /usr/local/lib/python3.8/dist-packages (3.6.2)\nRequirement already satisfied: regex in /usr/local/lib/python3.8/dist-packages (from nltk) (2020.11.13)\nRequirement already satisfied: click in /usr/local/lib/python3.8/dist-packages (from nltk) (7.1.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.8/dist-packages (from nltk) (1.0.1)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.8/dist-packages (from nltk) (4.56.2)\n\u001b[33mWARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv\u001b[0m\n"
],
[
"!pip install textblob\n",
"Collecting textblob\n Downloading textblob-0.15.3-py2.py3-none-any.whl (636 kB)\n\u001b[K |████████████████████████████████| 636 kB 17.7 MB/s eta 0:00:01\n\u001b[?25hRequirement already satisfied: nltk>=3.1 in /usr/local/lib/python3.8/dist-packages (from textblob) (3.6.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.8/dist-packages (from nltk>=3.1->textblob) (1.0.1)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.8/dist-packages (from nltk>=3.1->textblob) (4.56.2)\nRequirement already satisfied: click in /usr/local/lib/python3.8/dist-packages (from nltk>=3.1->textblob) (7.1.2)\nRequirement already satisfied: regex in /usr/local/lib/python3.8/dist-packages (from nltk>=3.1->textblob) (2020.11.13)\nInstalling collected packages: textblob\nSuccessfully installed textblob-0.15.3\n\u001b[33mWARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv\u001b[0m\n"
],
[
"!pip install spacy",
"Requirement already satisfied: spacy in /usr/local/lib/python3.8/dist-packages (3.0.6)\nRequirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (20.9)\nRequirement already satisfied: srsly<3.0.0,>=2.4.1 in /usr/local/lib/python3.8/dist-packages (from spacy) (2.4.1)\nRequirement already satisfied: pydantic<1.8.0,>=1.7.1 in /usr/local/lib/python3.8/dist-packages (from spacy) (1.7.4)\nRequirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (2.25.1)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.8/dist-packages (from spacy) (51.1.1)\nRequirement already satisfied: wasabi<1.1.0,>=0.8.1 in /usr/local/lib/python3.8/dist-packages (from spacy) (0.8.2)\nRequirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from spacy) (2.0.5)\nRequirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.8/dist-packages (from spacy) (3.0.5)\nRequirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (1.20.1)\nRequirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (1.0.5)\nRequirement already satisfied: catalogue<2.1.0,>=2.0.3 in /usr/local/lib/python3.8/dist-packages (from spacy) (2.0.4)\nRequirement already satisfied: spacy-legacy<3.1.0,>=3.0.4 in /usr/local/lib/python3.8/dist-packages (from spacy) (3.0.5)\nRequirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (4.56.2)\nRequirement already satisfied: blis<0.8.0,>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (0.7.4)\nRequirement already satisfied: typer<0.4.0,>=0.3.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (0.3.2)\nRequirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from spacy) (2.11.3)\nRequirement already satisfied: thinc<8.1.0,>=8.0.3 in /usr/local/lib/python3.8/dist-packages (from spacy) (8.0.3)\nRequirement already satisfied: pathy>=0.3.5 in /usr/local/lib/python3.8/dist-packages (from spacy) (0.5.2)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging>=20.0->spacy) (2.4.7)\nRequirement already satisfied: smart-open<4.0.0,>=2.2.0 in /usr/local/lib/python3.8/dist-packages (from pathy>=0.3.5->spacy) (3.0.0)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.8/dist-packages (from requests<3.0.0,>=2.13.0->spacy) (1.26.3)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.8/dist-packages (from requests<3.0.0,>=2.13.0->spacy) (2.10)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.8/dist-packages (from requests<3.0.0,>=2.13.0->spacy) (2020.12.5)\nRequirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.8/dist-packages (from requests<3.0.0,>=2.13.0->spacy) (4.0.0)\nRequirement already satisfied: click<7.2.0,>=7.1.1 in /usr/local/lib/python3.8/dist-packages (from typer<0.4.0,>=0.3.0->spacy) (7.1.2)\nRequirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.8/dist-packages (from jinja2->spacy) (1.1.1)\n\u001b[33mWARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv\u001b[0m\n"
],
[
"#Temps et fichiers\nimport os\nimport warnings\nimport time\nfrom datetime import timedelta\n\n#Manipulation de données\nimport pandas as pd\nimport numpy as np\n\n\n# Text\nimport nltk\nnltk.download('punkt')\nfrom nltk.tokenize import word_tokenize\nnltk.download('stopwords')\nfrom nltk.corpus import stopwords\nnltk.download('vader_lexicon')\nfrom nltk.sentiment.vader import SentimentIntensityAnalyzer\nfrom textblob import TextBlob\nimport string\nimport re\nimport spacy \n\n\n#Modélisation\nfrom sklearn.pipeline import Pipeline, FeatureUnion\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.decomposition import TruncatedSVD\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.base import BaseEstimator, TransformerMixin\nfrom sklearn.svm import LinearSVC\nfrom sklearn.model_selection import RandomizedSearchCV# the keys can be accessed with final_pipeline.get_params().keys()\nfrom sklearn.linear_model import LogisticRegression\n\nfrom xgboost import XGBClassifier\n\n\n#Evaluation\nfrom sklearn.metrics import f1_score, confusion_matrix\n\n\n#Visualisation\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\nimport plotly.express as px\n\n\n#Tracking d'expérience\nimport mlflow\nimport mlflow.sklearn",
"[nltk_data] Downloading package punkt to /root/nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n[nltk_data] Downloading package stopwords to /root/nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n[nltk_data] Downloading package vader_lexicon to /root/nltk_data...\n[nltk_data] Package vader_lexicon is already up-to-date!\n"
]
],
[
[
"### Utilisation du code du projet packagé",
"_____no_output_____"
]
],
[
[
"#Cette cellule permet d'appeler la version packagée du projet et d'en assurer le reload avant appel des fonctions\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"from dsa_sentiment.scripts.make_dataset import load_data",
"_____no_output_____"
],
[
"#from dsa_sentiment.scripts.evaluate import eval_metrics\nfrom dsa_sentiment.scripts.evaluate import eval_metrics",
"_____no_output_____"
],
[
"from dsa_sentiment.scripts.make_dataset import Preprocess_StrLower, Preprocess_transform_target",
"_____no_output_____"
]
],
[
[
"### Configuration de l'experiment MLFlow",
"_____no_output_____"
]
],
[
[
"mlflow.tracking.get_tracking_uri()",
"_____no_output_____"
],
[
"exp_name=\"DSA_sentiment_GPU\"\nmlflow.set_experiment(exp_name)",
"_____no_output_____"
]
],
[
[
"### Chargement des données",
"_____no_output_____"
]
],
[
[
"data_folder = os.path.join('..', 'data', 'raw')\nall_raw_files = [os.path.join(data_folder, fname)\n for fname in os.listdir(data_folder)]\nall_raw_files",
"_____no_output_____"
],
[
"random_state=42",
"_____no_output_____"
]
],
[
[
"Il n'est pas possible de faire de l'imputation comme avec des champs numérique. Il convient donc de supprimer les entrées vides",
"_____no_output_____"
]
],
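[
[
"# Illustrative sketch only (not from the original project): this is roughly what dropping empty\n# text entries looks like with pandas dropna, instead of imputation. 'demo' is a made-up frame.\ndemo = pd.DataFrame({'textID': ['a', 'b', 'c'], 'text': ['great day', None, 'so tired']})\ndemo_clean = demo.dropna(subset=['text']).reset_index(drop=True)\ndemo_clean",
"_____no_output_____"
]
],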
[
[
"X_train, y_train, X_val, y_val = load_data(all_raw_files[2], split=True, test_size=0.3, random_state=random_state, dropNA=True)",
"_____no_output_____"
],
[
"X_train.head()",
"_____no_output_____"
],
[
"y_train.head()",
"_____no_output_____"
],
[
"X_train.shape[0] + X_val.shape[0]",
"_____no_output_____"
],
[
"X_test, y_test = load_data(all_raw_files[1], split=False, random_state=random_state, dropNA=True)",
"_____no_output_____"
],
[
"X_test.head()",
"_____no_output_____"
],
[
"X_test.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 3534 entries, 0 to 3533\nData columns (total 2 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 textID 3534 non-null object\n 1 text 3534 non-null object\ndtypes: object(2)\nmemory usage: 55.3+ KB\n"
],
[
"y_test.head()",
"_____no_output_____"
],
[
"X_train = Preprocess_StrLower(X_train, columns_to_process=['text'])\nX_train.head()",
"_____no_output_____"
],
[
"y_train = Preprocess_transform_target(y_train, columns_to_process=['sentiment'])\ny_train.head()",
"_____no_output_____"
],
[
"y_val = Preprocess_transform_target(y_val, ['sentiment'])\ny_val.head()",
"_____no_output_____"
],
[
"X_test = Preprocess_StrLower(X_test, columns_to_process=['text'])\nX_test.head()",
"_____no_output_____"
],
[
"y_test = Preprocess_transform_target(y_test, ['sentiment'])\ny_test.head()",
"_____no_output_____"
]
],
[
[
"# === ESSAI HUGGING FACE ===",
"_____no_output_____"
]
],
[
[
"X_train.head()",
"_____no_output_____"
],
[
"# X_train_txt=X_train['text'].apply(lambda x : tokenizer(preprocess(x), return_tensors='pt'))",
"_____no_output_____"
],
[
"# X_train_txt.head()",
"_____no_output_____"
],
[
"# output = X_train_txt.apply(lambda x : model(**x))",
"_____no_output_____"
],
[
"# output.describe()",
"_____no_output_____"
],
[
"def TorchTwitterRoBERTa_Pred(text = \"Good night 😊\"):\n text = preprocess(text)\n otpt = nlp(text)[0]\n# otpt = (list(otpt[i].values())[1] for i in range(len(otpt)))\n neg = otpt[0]['score']\n neu = otpt[1]['score']\n pos = otpt[2]['score']\n \n# NewName = {0:'roBERTa-neg', 1:'roBERTa-neu', 2:'roBERTa-pos'}\n# otpt = pd.json_normalize(otpt).transpose().rename(columns=NewName).reset_index().drop([0]).drop(columns=['index'])\n return neg, neu, pos",
"_____no_output_____"
],
[
"test = TorchTwitterRoBERTa_Pred()\ntest",
"_____no_output_____"
],
[
"df=X_train.head()[['text']]\ndf",
"_____no_output_____"
],
[
"#df.apply(TorchTwitterRoBERTa_Pred, axis=1, result_type='expand')\ndf['roBERTa_neg'],df['roBERTa_neu'],df['roBERTa_pos'] = zip(*df['text'].map(TorchTwitterRoBERTa_Pred))\ndf",
"_____no_output_____"
],
[
"def run_loopy_roBERTa(df):\n v_neg, v_neu, v_pos = [], [], []\n for _, row in df.iterrows():\n v1, v2, v3 = TorchTwitterRoBERTa_Pred(row.values[0])\n v_neg.append(v1)\n v_neu.append(v2)\n v_pos.append(v3)\n df_result = pd.DataFrame({'roBERTa_neg': v_neg,\n 'roBERTa_neu': v_neu,\n 'roBERTa_pos': v_pos})\n return df_result",
"_____no_output_____"
],
[
"run_loopy_roBERTa(X_train.head()[['text']])",
"_____no_output_____"
],
[
"y_train.head()",
"_____no_output_____"
],
[
"TorchTwitterRoBERTa_Pred(X_test['text'][1])",
"_____no_output_____"
],
[
"TorchTwitterRoBERTa_Pred()",
"_____no_output_____"
],
[
"class clTwitterroBERTa(BaseEstimator, TransformerMixin):\n def __init__(self, field):\n self.field = field\n def fit(self, X, y=None):\n return self\n def transform(self, X):\n res = run_loopy_roBERTa(X[[self.field]])\n \n #self.res[['roBERTa_neg', 'roBERTa_neu', 'roBERTa_pos']] = X[self.field].apply(lambda x : TorchTwitterRoBERTa_Pred(x)).apply(pd.Series)\n return res\n #return self.res",
"_____no_output_____"
],
[
"roBERTa_pipe=Pipeline([\n ('roBERTa', clTwitterroBERTa(field='text'))\n ])",
"_____no_output_____"
],
[
"essai=roBERTa_pipe.transform(X_train.head())\nessai.head()",
"_____no_output_____"
],
[
"essai=roBERTa_pipe.transform(X_train)",
"_____no_output_____"
],
[
"\ntorch.cuda.is_available()",
"_____no_output_____"
],
[
"essai.tail()",
"_____no_output_____"
],
[
"for var in ['roBERTa_neg', 'roBERTa_neu', 'roBERTa_pos']:\n plt.figure(figsize=(12,4))\n sns.distplot(essai[(y_train==1)['sentiment']][var], bins=30, kde=False, \n color='green', label='Positive')\n sns.distplot(essai[(y_train==-1)['sentiment']][var], bins=30, kde=False, \n color='red', label='Negative')\n sns.distplot(essai[(y_train==0)['sentiment']][var], bins=30, kde=False, \n color='blue', label='Neutral')\n plt.legend()\n plt.title(f'Histogram of {var} by true sentiment');",
"/usr/local/lib/python3.8/dist-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/usr/local/lib/python3.8/dist-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/usr/local/lib/python3.8/dist-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/usr/local/lib/python3.8/dist-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/usr/local/lib/python3.8/dist-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/usr/local/lib/python3.8/dist-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/usr/local/lib/python3.8/dist-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/usr/local/lib/python3.8/dist-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/usr/local/lib/python3.8/dist-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n"
]
],
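[
[
"# Optional sketch (not in the original notebook): the same histograms with seaborn's\n# non-deprecated histplot API (assumes seaborn >= 0.11), avoiding the distplot FutureWarnings above.\n# 'plot_df' is a hypothetical helper frame built from the existing 'essai' and 'y_train'.\nplot_df = essai.copy()\nplot_df['sentiment'] = y_train['sentiment'].values.astype(str)\nfor var in ['roBERTa_neg', 'roBERTa_neu', 'roBERTa_pos']:\n    plt.figure(figsize=(12,4))\n    sns.histplot(data=plot_df, x=var, hue='sentiment', bins=30, element='step')\n    plt.title(f'Histogram of {var} by true sentiment')",
"_____no_output_____"
]
],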
[
[
"## 1) Model",
"_____no_output_____"
],
[
"On commence par définir une fonction générique qui sera en capacité d'ajuster, optimiser et logger dans MLFlow les résultats de pipelines qui seront produits pour chaque essai",
"_____no_output_____"
]
],
[
[
"def trainPipelineMlFlow(xpName, pipeline, X_train, y_train, X_test, y_test, fixed_params={}, opti=False, iterable_params={}):\n \"\"\"\n Fonction générique permettant d'entrainer et d'optimiser un pipeline sklearn\n Les paramètres et résultats sont stockés dans MLFlow\n \"\"\"\n \n with mlflow.start_run(run_name=xpName):\n \n start_time = time.monotonic() \n \n warnings.filterwarnings(\"ignore\")\n \n # fit pipeline\n pipeline.set_params(**fixed_params)\n if not opti:\n search = pipeline\n else:\n search = RandomizedSearchCV(pipeline, iterable_params)\n search.fit(X_train, y_train)\n \n # get params\n # params_to_log = pipeline.get_params() #select initial params PB : can lead to greater than 250 charac limit\n params_to_log = fixed_params #select initial params\n if opti:\n params_to_log.update(search.best_params_) #update for optimal solution\n mlflow.log_params(params_to_log)\n \n \n # Evaluate metrics\n y_pred=search.predict(X_test)\n (f1_test, cr_test) = eval_metrics(y_test, y_pred)\n \n # Print out metrics\n print(xpName)\n print(\" f1_test: %s\" % f1_test)\n print(\" CR_test: %s\" % cr_test)\n\n mlflow.log_metrics({\"f1_test\": f1_test})\n mlflow.sklearn.log_model(pipeline, xpName)\n \n end_time = time.monotonic()\n elapsed_time = timedelta(seconds=end_time - start_time)\n print('elapsed time :', elapsed_time)\n mlflow.set_tag(key=\"elapsed_time\", value=elapsed_time) \n ",
"_____no_output_____"
],
[
"def random_state_params(pipe, seed):\n \"\"\"Crée un dictionnaire constitué de tous les paramètres 'random_state' d'un pipe et leur assigne une valeur unique\"\"\"\n rs = re.findall(r\"[a-zA-Z\\_]+_random_state\", ' '.join(list(pipe.get_params().keys())))\n rs=dict.fromkeys(rs, seed)\n return rs",
"_____no_output_____"
]
],
[
[
"La cellule suivante permet de créer des étapes de sélection de colonnes dans les Data Frame en entrée",
"_____no_output_____"
]
],
[
[
"from sklearn.base import BaseEstimator, TransformerMixin\n\nclass TextSelector(BaseEstimator, TransformerMixin):\n def __init__(self, field):\n self.field = field\n def fit(self, X, y=None):\n return self\n def transform(self, X):\n return X[self.field]\n\nclass NumberSelector(BaseEstimator, TransformerMixin):\n def __init__(self, field):\n self.field = field\n def fit(self, X, y=None):\n return self\n def transform(self, X):\n return X[[self.field]]",
"_____no_output_____"
],
[
"def Tokenizer(str_input):\n words = re.sub(r\"[^A-Za-z0-9\\-]\", \" \", str_input).lower().split()\n porter_stemmer=nltk.PorterStemmer()\n words = [porter_stemmer.stem(word) for word in words]\n return words",
"_____no_output_____"
],
[
"# Create function so that we could reuse later\ndef plot_cm(y_test, y_pred, target_names=[-1, 0, 1], \n figsize=(5,3)):\n \"\"\"Create a labelled confusion matrix plot.\"\"\"\n cm = confusion_matrix(y_test, y_pred)\n fig, ax = plt.subplots(figsize=figsize)\n sns.heatmap(cm, annot=True, fmt='g', cmap='BuGn', cbar=False, \n ax=ax)\n ax.set_title('Confusion matrix')\n ax.set_xlabel('Predicted')\n ax.set_xticklabels(target_names)\n ax.set_ylabel('Actual')\n ax.set_yticklabels(target_names, \n fontdict={'verticalalignment': 'center'});",
"_____no_output_____"
]
],
[
[
"## roBERTa RF",
"_____no_output_____"
]
],
[
[
"roBERTa_RF_pipeline = Pipeline(\n steps=[\n ('roBERTa', clTwitterroBERTa(field='text')),\n (\"classifier\", RandomForestClassifier(n_jobs=-1))\n ]\n)\n",
"_____no_output_____"
],
[
"roBERTa_RF_Pipe = Pipeline(\n steps=[\n ('roBERTa', roBERTa_pipe),\n (\"classifier\", RandomForestClassifier(n_jobs=-1))\n ]\n)\n",
"_____no_output_____"
],
[
"random_state_params(roBERTa_RF_Pipe, random_state)",
"_____no_output_____"
],
[
"trainPipelineMlFlow(xpName=\"roBERTa - RF\", \n pipeline=roBERTa_RF_Pipe, \n X_train=X_train, y_train=y_train, X_test=X_test, y_test=y_test, \n fixed_params=random_state_params(roBERTa_RF_Pipe, random_state))",
"roBERTa - RF\n f1_test: 0.7047697319571444\n CR_test: precision recall f1-score support\n\n -1 0.69 0.72 0.70 1001\n 0 0.66 0.66 0.66 1430\n 1 0.77 0.74 0.76 1103\n\n accuracy 0.70 3534\n macro avg 0.71 0.70 0.70 3534\nweighted avg 0.70 0.70 0.70 3534\n\nelapsed time : 0:02:20.096396\n"
],
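[
"# Hypothetical example (not run in the original notebook) of the opti=True branch of\n# trainPipelineMlFlow: RandomizedSearchCV expects <step>__<param> keys for the pipeline above.\n# 'rf_param_distributions' is an illustrative name and grid, not part of the original project.\nrf_param_distributions = {\n    'classifier__n_estimators': [100, 200, 400],\n    'classifier__max_depth': [None, 10, 20],\n}\n# trainPipelineMlFlow(xpName=\"roBERTa - RF (random search)\",\n#                     pipeline=roBERTa_RF_Pipe,\n#                     X_train=X_train, y_train=y_train, X_test=X_test, y_test=y_test,\n#                     fixed_params=random_state_params(roBERTa_RF_Pipe, random_state),\n#                     opti=True, iterable_params=rf_param_distributions)",
"_____no_output_____"
],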
[
"y_train_pred_roBERTa_RF = roBERTa_RF_Pipe.predict(X_train)\ny_test_pred_roBERTa_RF = roBERTa_RF_Pipe.predict(X_test)",
"_____no_output_____"
],
[
"X_train_roBERTa= roBERTa_pipe.transform(X_train)",
"_____no_output_____"
],
[
"X_test_roBERTa= roBERTa_pipe.transform(X_test)",
"_____no_output_____"
],
[
"X_train_roBERTa.to_pickle('/mnt/data/interim/X_train_roBERTa.plk')\nX_test_roBERTa.to_pickle('/mnt/data/interim/X_test_roBERTa.plk')",
"_____no_output_____"
],
[
"X_test_roBERTa.head(10)",
"_____no_output_____"
],
[
"y_test.head()",
"_____no_output_____"
],
[
"X_test_roBERTa.to_pickle('/mnt/data/interim/X_test_roBERTa.plk')",
"_____no_output_____"
],
[
"pd.DataFrame(y_train_pred_roBERTa_RF).to_pickle('/mnt/data/interim/y_train_pred_roBERTa_RF.plk')\npd.DataFrame(y_test_pred_roBERTa_RF).to_pickle('/mnt/data/interim/y_test_pred_roBERTa_RF.plk')\n",
"_____no_output_____"
],
[
"X_train_roBERTa[0:10]",
"_____no_output_____"
],
[
"y_train_score_roBERTa_RF=pd.DataFrame(roBERTa_RF_pipeline['classifier'].predict_proba(X_train_roBERTa_RF), columns=['roBERTa_neg', 'roBERTa_neu', 'roBERTa_pos'])",
"_____no_output_____"
],
[
"y_train_score_roBERTa_RF.tail()",
"_____no_output_____"
],
[
"y_test_score_roBERTa_RF=pd.DataFrame(roBERTa_RF_pipeline['classifier'].predict_proba(X_test_roBERTa), columns=['roBERTa_neg', 'roBERTa_neu', 'roBERTa_pos'])",
"_____no_output_____"
],
[
"y_train_score_roBERTa_RF.to_pickle('/mnt/data/interim/y_train_score_roBERTa_RF.plk')\n",
"_____no_output_____"
],
[
"y_test_score_roBERTa_RF.to_pickle('/mnt/data/interim/y_test_score_roBERTa_RF.plk')",
"_____no_output_____"
],
[
"y_test_pred_roBERTa_RF=pd.DataFrame(roBERTa_RF_pipeline['classifier'].predict(X_test_roBERTa))",
"_____no_output_____"
],
[
"y_test_pred_roBERTa_RF.head()",
"_____no_output_____"
],
[
" y_test_pred_roBERTa_RF[0]",
"_____no_output_____"
],
[
"y_test.head()",
"_____no_output_____"
],
[
"plot_cm(y_test['sentiment'], y_test_pred_roBERTa_RF[0])",
"_____no_output_____"
],
[
"(f1_test, cr_test) = eval_metrics(y_test['sentiment'], y_test_pred_roBERTa_RF[0])",
"_____no_output_____"
],
[
"f1_test",
"_____no_output_____"
],
[
"print(cr_test)",
" precision recall f1-score support\n\n -1 0.68 0.72 0.70 1001\n 0 0.66 0.66 0.66 1430\n 1 0.77 0.74 0.76 1103\n\n accuracy 0.70 3534\n macro avg 0.71 0.71 0.71 3534\nweighted avg 0.70 0.70 0.70 3534\n\n"
],
[
"from sklearn.metrics import roc_curve, auc\nfrom sklearn.datasets import make_classification",
"_____no_output_____"
],
[
"X_test.head()",
"_____no_output_____"
],
[
"y_test.head()",
"_____no_output_____"
],
[
"TorchTwitterRoBERTa_Pred(X_test['text'][1])",
"_____no_output_____"
],
[
"X_test_roBERTa_RF.head()",
"_____no_output_____"
],
[
"test= X_test['text'][0:5].apply(lambda x : TorchTwitterRoBERTa_Pred(x))\ntest",
"_____no_output_____"
],
[
"roBERTa_pipe.transform(X_test.head())",
"_____no_output_____"
],
[
"roBERTa_RF_Pipe['roBERTa'].transform(X_test.head())",
"_____no_output_____"
],
[
"roBERTa_RF_pipeline['roBERTa'].transform(X_test.head())",
"_____no_output_____"
],
[
"y_test_score_roBERTa_RF.head()",
"_____no_output_____"
],
[
"pd.DataFrame(roBERTa_RF_pipeline['classifier'].predict(X_test_roBERTa_RF)).head()",
"_____no_output_____"
],
[
"X_test.head()",
"_____no_output_____"
],
[
"roBERTa_RF_pipeline['roBERTa'].transform(X_test.head())",
"_____no_output_____"
],
[
"def multiclass_roc_auc_score(y_test, y_pred, average=\"macro\"):\n lb = LabelBinarizer()\n lb.fit(y_test)\n y_test = lb.transform(y_test)\n y_pred = lb.transform(y_pred)\n return roc_auc_score(y_test, y_pred, average=average)",
"_____no_output_____"
],
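[
"# Possible usage of the helper above (assumption, not executed in the original notebook),\n# using the hard test-set predictions computed earlier.\nmulticlass_roc_auc_score(y_test['sentiment'], y_test_pred_roBERTa_RF[0])",
"_____no_output_____"
],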
[
"from sklearn.preprocessing import label_binarize\nfrom scipy import interp\nfrom itertools import cycle",
"_____no_output_____"
],
[
"y_test_bi = label_binarize(y_test['sentiment'], classes=[-1, 0, 1])\nn_classes = y_test_bi.shape[1]",
"_____no_output_____"
],
[
"y_test.head()",
"_____no_output_____"
],
[
"y_test_bi[0:5]",
"_____no_output_____"
],
[
"# Compute ROC curve and ROC area for each class\nfpr = dict()\ntpr = dict()\nroc_auc = dict()\nfor i in range(n_classes):\n fpr[i], tpr[i], _ = roc_curve(y_test_bi[:, i], y_test_score_roBERTa_RF[['roBERTa_neg', 'roBERTa_neu', 'roBERTa_pos'][i]])\n roc_auc[i] = auc(fpr[i], tpr[i])",
"_____no_output_____"
],
[
"y_test_score_roBERTa_RF.to_numpy().shape",
"_____no_output_____"
],
[
"y_test_score_roBERTa_RF.to_numpy().ravel().shape",
"_____no_output_____"
],
[
"y_test.to_numpy().ravel().shape",
"_____no_output_____"
],
[
"test=pd.DataFrame(roBERTa_RF_pipeline['classifier'].predict_proba(X_test_roBERTa_RF), columns=['roBERTa_neg', 'roBERTa_neu', 'roBERTa_pos'])",
"_____no_output_____"
],
[
"test.head()",
"_____no_output_____"
],
[
"X_test_roBERTa_RF.head()",
"_____no_output_____"
],
[
"\n# Compute micro-average ROC curve and ROC area\nfpr[\"micro\"], tpr[\"micro\"], _ = roc_curve(np.where(abs(y_test_bi.ravel())>0.5,1,0), y_test_score_roBERTa_RF.to_numpy().ravel())\nroc_auc[\"micro\"] = auc(fpr[\"micro\"], tpr[\"micro\"])",
"_____no_output_____"
],
[
"# First aggregate all false positive rates\nall_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))\n\n# Then interpolate all ROC curves at this points\nmean_tpr = np.zeros_like(all_fpr)\nfor i in range(n_classes):\n mean_tpr += interp(all_fpr, fpr[i], tpr[i])\n\n# Finally average it and compute AUC\nmean_tpr /= n_classes\n\nfpr[\"macro\"] = all_fpr\ntpr[\"macro\"] = mean_tpr\nroc_auc[\"macro\"] = auc(fpr[\"macro\"], tpr[\"macro\"])\n\n\n# Plot all ROC curves\nplt.figure()\nlw = 2\n\nplt.plot(fpr[\"micro\"], tpr[\"micro\"],\n label='micro-average ROC curve (area = {0:0.2f})'\n ''.format(roc_auc[\"micro\"]),\n color='deeppink', linestyle=':', linewidth=4)\n\nplt.plot(fpr[\"macro\"], tpr[\"macro\"],\n label='macro-average ROC curve (area = {0:0.2f})'\n ''.format(roc_auc[\"macro\"]),\n color='navy', linestyle=':', linewidth=4)\n\ncolors = cycle(['aqua', 'darkorange', 'cornflowerblue'])\nfor i, color in zip(range(n_classes), colors):\n plt.plot(fpr[i], tpr[i], color=color, lw=lw,\n label='ROC curve of class {0} (area = {1:0.2f})'\n ''.format(i, roc_auc[i]))\n\n\nplt.plot([0, 1], [0, 1], 'k--', lw=lw)\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.05])\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('Some extension of Receiver operating characteristic to multi-class')\nplt.legend(loc=\"lower right\")\nplt.show()\n\n",
"_____no_output_____"
],
[
"!pip install yellowbrick",
"Collecting yellowbrick\n Downloading yellowbrick-1.3.post1-py3-none-any.whl (271 kB)\n\u001b[K |████████████████████████████████| 271 kB 7.5 MB/s eta 0:00:01\n\u001b[?25hRequirement already satisfied: matplotlib!=3.0.0,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from yellowbrick) (3.3.3)\nRequirement already satisfied: scikit-learn>=0.20 in /usr/local/lib/python3.8/dist-packages (from yellowbrick) (0.24.1)\nRequirement already satisfied: scipy>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from yellowbrick) (1.6.0)\nRequirement already satisfied: cycler>=0.10.0 in /usr/local/lib/python3.8/dist-packages (from yellowbrick) (0.10.0)\nRequirement already satisfied: numpy<1.20,>=1.16.0 in /usr/local/lib/python3.8/dist-packages (from yellowbrick) (1.19.5)\nRequirement already satisfied: six in /usr/local/lib/python3.8/dist-packages (from cycler>=0.10.0->yellowbrick) (1.15.0)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.8/dist-packages (from matplotlib!=3.0.0,>=2.0.2->yellowbrick) (2.8.1)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.8/dist-packages (from matplotlib!=3.0.0,>=2.0.2->yellowbrick) (1.3.1)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in /usr/local/lib/python3.8/dist-packages (from matplotlib!=3.0.0,>=2.0.2->yellowbrick) (2.4.7)\nRequirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.8/dist-packages (from matplotlib!=3.0.0,>=2.0.2->yellowbrick) (8.1.0)\nRequirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.8/dist-packages (from scikit-learn>=0.20->yellowbrick) (2.1.0)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.8/dist-packages (from scikit-learn>=0.20->yellowbrick) (1.0.1)\nInstalling collected packages: yellowbrick\nSuccessfully installed yellowbrick-1.3.post1\n\u001b[33mWARNING: You are using pip version 20.3.3; however, version 21.1.1 is available.\nYou should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.\u001b[0m\n"
],
[
"from yellowbrick.classifier import ROCAUC\n\nvisualizer = ROCAUC(model, classes=[\"win\", \"loss\", \"draw\"])\n",
"_____no_output_____"
],
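[
"# Assumed continuation (not in the original notebook): fit and score the visualizer on the roBERTa\n# feature frames computed above, then render the per-class ROC curves.\nvisualizer.fit(X_train_roBERTa, y_train['sentiment'])\nvisualizer.score(X_test_roBERTa, y_test['sentiment'])\nvisualizer.show()",
"_____no_output_____"
],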
[
"fpr, tpr, thresholds = roc_curve(y_test['sentiment'], y_test_score_roBERTa_RF)",
"_____no_output_____"
]
],
[
[
"# fin",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e731cf81e8a5af52313a4dd46390c73449c6f007 | 28,846 | ipynb | Jupyter Notebook | Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/18_EX_VARFORM1D.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
] | null | null | null | Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/18_EX_VARFORM1D.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
] | null | null | null | Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/18_EX_VARFORM1D.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
] | 2 | 2022-02-09T15:41:33.000Z | 2022-02-11T07:47:40.000Z | 67.872941 | 13,264 | 0.685468 | [
[
[
"from varform1D import *\nimport sympy as sym\nimport numpy as np\n\ndef phi_factory(name, N, highest_derivative=2):\n \"\"\"\n Generate N+1 basis functions (phi) on [0,1] which vanish\n on the boundary. Differentiate the functions up to\n (and including) highest_derivative.\n \"\"\"\n x = sym.Symbol('x')\n from sympy import sin, cos, pi\n if name == 'sines':\n phi = {0: [sin(pi*(i+1)*x) for i in range(N+1)]}\n elif name == 'poly':\n phi = {0: [x**(i+1)*(1-x) for i in range(N+1)]}\n elif name == 'poly2':\n phi = {0: [x**(i+2)*(1-x) for i in range(N+1)]}\n elif name == 'poly3':\n phi = {0: [(1-x)**(i+1) for i in range(N+1)]}\n elif name == 'Lagrange':\n from Lagrange import Lagrange_polynomials\n # Return all Lagrange polynomials and strip off\n # boundary polynomials in application code (if necessary)\n phi = {0: Lagrange_polynomials(x, N, [0,1], 'uniform')[0]}\n elif name == 'Lagrange_Cheb':\n from Lagrange import Lagrange_polynomials\n phi = {0: Lagrange_polynomials(x, N, [0,1], 'Chebyshev')}\n\n # Compute derivatives of the basis functions\n for d in range(1, highest_derivative+1):\n phi[d] = [sym.diff(phi[0][i], x, d) for i in range(len(phi[0]))]\n return phi\n\n\ndef case0(f, N=3):\n B = 1 - x**3\n dBdx = sym.diff(B, x)\n\n # Compute basis functions and their derivatives\n phi = {0: [x**(i+1)*(1-x) for i in range(N+1)]}\n phi[1] = [sym.diff(phi_i, x) for phi_i in phi[0]]\n\n def integrand_lhs(phi, i, j):\n return phi[1][i]*phi[1][j]\n\n def integrand_rhs(phi, i):\n return f*phi[0][i] - dBdx*phi[1][i]\n\n Omega = [0, 1]\n\n u_bar, _ = solver(integrand_lhs, integrand_rhs, phi, Omega,\n verbose=True, symbolic=True)\n u = B + u_bar\n print(('solution u:', sym.simplify(sym.expand(u))))\n\n # Calculate analytical solution\n\n # Solve -u''=f by integrating f twice\n f1 = sym.integrate(f, x)\n f2 = sym.integrate(f1, x)\n # Add integration constants\n C1, C2 = sym.symbols('C1 C2')\n u_e = -f2 + C1*x + C2\n # Find C1 and C2 from the boundary conditions u(0)=0, u(1)=1\n s = sym.solvers.solve([u_e.subs(x,0) - 1, u_e.subs(x,1) - 0], [C1, C2])\n # Form the exact solution\n u_e = -f2 + s[C1]*x + s[C2]\n print(('analytical solution:', u_e))\n #print 'error:', u - u_e # many terms - which cancel\n print(('error:', sym.expand(u - u_e)))\n\n\ndef case1(N, basis='sines'):\n \"\"\"\n Solve -u''=f(x) on [0,1], u(0)=u(1)=0. 
f(x)=2.\n Method: least-squares, Galerkin, collocation.\n \"\"\"\n\n f = 2\n D = 0; L = 1\n\n def integrand_lhs_LS(phi, i, j):\n return -phi[2][i]*phi[2][j]\n\n def integrand_rhs_LS(phi, i):\n return f*phi[2][i]\n\n def integrand_lhs_G2(phi, i, j):\n return -phi[0][i]*phi[2][j]\n\n def integrand_rhs_G2(phi, i):\n return f*phi[0][i]\n\n def integrand_lhs_G1(phi, i, j):\n return phi[1][i]*phi[1][j]\n\n def integrand_rhs_G1(phi, i):\n return f*phi[0][i]\n\n def term_lhs_co(phi, x, i, j):\n return -phi[2][j](x[i])\n\n def term_rhs_co(phi, x, i):\n return 2\n\n Omega = [0, 1]\n phi = phi_factory(basis, N, 2)\n u = {} # dict of solutions corresponding to different methods\n u['LS'] = solver(integrand_lhs_LS, integrand_rhs_LS, phi, Omega)[0]\n u['G1'] = solver(integrand_lhs_G1, integrand_rhs_G1, phi, Omega)[0]\n u['G2'] = solver(integrand_lhs_G2, integrand_rhs_G2, phi, Omega)[0]\n u['exact'] = x*(D + L - x)\n # Test different collocation points\n points = []\n if N == 0:\n points.extend([sym.Rational(1,2), 0.1, 0.9])\n else:\n points.append(np.linspace(0.1, 0.9, N+1)) # uniformly distributed\n for seed in 2, 10:\n np.random.seed(seed)\n points.append(np.random.uniform(0, 1, size=N+1))\n for k, p in enumerate(points):\n u['co'+str(k)] = collocation(term_lhs_co, term_rhs_co, phi, p)\n import pprint; pprint.pprint(u)\n comparison_plot(u, [0, 1])\n\ndef case2(N):\n \"\"\"\n Solve -u''=f(x) on [0,1], u'(0)=C, u(1)=D.\n Method: Galerkin only.\n \"\"\"\n x = sym.Symbol('x')\n f = 2\n D = 2; E = 3;\n L = 1 # basis function factory restricted to [0,1]\n D = sym.Symbol('D')\n C = sym.Symbol('C')\n\n # u exact\n f1 = sym.integrate(f, x)\n f2 = sym.integrate(f1, x)\n C1, C2 = sym.symbols('C1 C2')\n u = -f2 + C1*x + C2\n BC1 = sym.diff(u,x).subs(x, 0) - C\n BC2 = u.subs(x,1) - D\n s = sym.solve([BC1, BC2], [C1, C2])\n u_e = -f2 + s[C1]*x + s[C2]\n\n def diff_eq(u, x):\n eqs = {'diff': -sym.diff(u, x, x) - f,\n 'BC1': sym.diff(u, x).subs(x, 0) - C,\n 'BC2': u.subs(x, L) - D}\n for eq in eqs:\n eqs[eq] = sym.simplify(eqs[eq])\n\n print(('Check of exact solution:', diff_eq(u_e, x)))\n\n def integrand_lhs(phi, i, j):\n return phi[1][i]*phi[1][j]\n\n B = D*x/L\n dBdx = sym.diff(B, x)\n\n def integrand_rhs(phi, i):\n return f*phi[0][i] - dBdx*phi[1][i]\n\n boundary_lhs = None # not used here\n\n def boundary_rhs(phi, i):\n return -C*phi[0][i].subs(x, 0)\n\n Omega = [0, L]\n phi = phi_factory('poly3', N, 1)\n #phi = phi_factory('Lagrange', N, 1)\n print((phi[0]))\n u = {'G1': solver(integrand_lhs, integrand_rhs, phi, Omega,\n boundary_lhs, boundary_rhs, verbose=True)[0] + B,\n 'exact': u_e}\n print(('numerical solution:', u['G1']))\n print(('simplified:', sym.simplify(u['G1'])))\n print(('u exact', u['exact']))\n # Change from symblic to numerical computing for plotting.\n # That is, replace C and D symbols by numbers\n # (comparison_plot takes the expressions with x to functions of x\n # so C and D must have numbers).\n for name in u:\n u[name] = u[name].subs(C, 2).subs(D, -2)\n print(('u:', u))\n C = 2; D = -2 # Note that these are also remembered by u_e\n comparison_plot(u, [0, 1])\n\ndef case3(N, a=1, a_symbols={}, f=0, f_symbols={},\n basis='poly', symbolic=True, B_type='linear'):\n \"\"\"\n Solve -(a(x)u)'=0 on [0,1], u(0)=1, u(1)=0.\n Method: Galerkin.\n a and f must be sympy expressions with x as the only symbol\n (other symbols in a gives very long symbolic expressions in\n the solution and is of little value).\n \"\"\"\n # Note: a(x) with symbols\n #f = sym.Rational(10,7) # for a=1, corresponds to f=0 when 
a=1/(2+10x)\n\n \"\"\"\n def a(x): # very slow\n return sym.Piecewise((a1, x < sym.Rational(1,2)),\n (a2, x >= sym.Rational(1,2)))\n\n def a(x): # cannot be treated by sympy or wolframalpha.com\n return 1./(a1 + a2*x)\n\n # Symbolic a(x) makes large expressions...\n def a(x):\n return 2 + b*x\n def a(x):\n return 1/(2 + b*x)\n\n def a(x):\n return 1/(2 + 10*x)\n\n b = sym.Symbol('b')\n def a(x):\n return sym.exp(b*x)\n \"\"\"\n if f == 0:\n h = sym.integrate(1/a, x)\n h1 = h.subs(x, 1)\n h0 = h.subs(x, 0)\n u_exact = 1 - (h-h0)/(h1-h0)\n else:\n # Assume a=1\n f1 = sym.integrate(f, x)\n f2 = sym.integrate(f1, x)\n C1, C2 = sym.symbols('C1 C2')\n u = -f2 + C1*x + C2\n BC1 = u.subs(x,0) - 1\n BC2 = u.subs(x,1) - 0\n s = sym.solve([BC1, BC2], [C1, C2])\n u_exact = -f2 + s[C1]*x + s[C2]\n print(('u_exact:', u_exact))\n\n def integrand_lhs(phi, i, j):\n return a*phi[1][i]*phi[1][j]\n\n def integrand_rhs(phi, i):\n return f*phi[0][i] - a*dBdx*phi[1][i]\n\n boundary_lhs = boundary_rhs = None # not used here\n\n Omega = [0, 1]\n if B_type == 'linear':\n B = 1 - x\n elif B_type == 'cubic':\n B = 1 - x**3\n elif B_type == 'sqrt':\n B = 1 - sym.sqrt(x)\n else:\n B = 1 - x\n if basis == 'poly':\n phi = phi_factory('poly', N, 1)\n elif basis == 'Lagrange':\n phi = phi_factory('Lagrange', N, 1)\n print(('len phi:', len(phi)))\n B = phi[0][0]*1 + phi[0][-1]*0\n phi[0] = phi[0][1:-1]\n phi[1] = phi[1][1:-1]\n elif basis == 'sines':\n phi = phi_factory('sines', N, 1)\n else:\n raise ValueError('basis=%s must be poly, Lagrange or sines' % basis)\n print(('Basis functions:', phi[0]))\n\n dBdx = sym.diff(B, x)\n\n verbose = True if symbolic else False\n phi_sum, _ = solver(integrand_lhs, integrand_rhs, phi, Omega,\n boundary_lhs, boundary_rhs, verbose=verbose,\n symbolic=symbolic)\n print(('sum c_j*phi_j:', phi_sum))\n name = 'numerical, N=%d' % N\n u = {name: phi_sum + B, 'exact': sym.simplify(u_exact)}\n print(('Numerical solution:', u[name]))\n if verbose:\n print(('...simplified to', sym.simplify(u[name])))\n print(('...exact solution:', sym.simplify(u['exact'])))\n\n f_str = str(f).replace(' ', '')\n a_str = str(a).replace(' ', '')\n filename = 'DaDu=-%s_a=%s_N%s_%s.eps' % (f_str, a_str, N, basis)\n # Change from symblic to numerical computing for plotting.\n all_symbols = {}\n all_symbols.update(a_symbols)\n all_symbols.update(f_symbols)\n if all_symbols:\n for s in all_symbols:\n value = all_symbols[s]\n print(('symbol', s, 'gets value', value))\n u[name] = u[name].subs(s, value)\n u['exact'] = u['exact'].subs(s, value)\n print(('Numerical u_exact formula before plot:', u_exact))\n comparison_plot(u, [0, 1], filename)\n\n\ndef comparison_plot(u, Omega, filename='tmp.eps'):\n \"\"\"\n Plot the solution u(x) (a sympy expression with x as the only\n symbol - all other symbols must have been substituted by numbers)\n and the exact solution u_e (which is a Python function that can\n take an array x and return the values of the exact solution).\n Omega is a 2-tuple/list with the domain's lower and upper limit.\n \"\"\"\n x = sym.Symbol('x')\n resolution = 401\n xcoor = np.linspace(Omega[0], Omega[1], resolution)\n for name in u:\n u[name] = sym.lambdify([x], u[name], modules=\"numpy\")\n u[name] = u[name](xcoor)\n legends = []\n for name in u:\n plt.plot(xcoor, u[name])\n legends.append(name)\n plt.legend(legends)\n plt.savefig(filename)\n\nx, b = sym.symbols('x b')\n\n#case1(8, 'sines')\n#case2(1)\n#case3(4)\n#case3(N=3, a=sym.exp(b*x), f=0, basis='poly',\n# a_symbols={b: 8}, f_symbols={})\n#case3(N=2, a=1, 
f=b, basis='poly', f_symbols={b: 10}, B_type='cubic')\n\n#case0(f=b, N=1)\n#case0(f=x**6, N=7)\ncase2(1)\n\n\n",
"('Check of exact solution:', None)\n[1 - x, (1 - x)**2]\n...evaluating matrix... (0,0): 1\n(0,1): 2 - 2*x\nrhs: D - 2*x + 2\n(1,1): (2*x - 2)**2\nrhs: -D*(2*x - 2) + 2*(1 - x)**2\n\nA:\n Matrix([[1, 1], [1, 4/3]]) \nb:\n Matrix([[-C + D + 1], [-C + D + 2/3]])\ncoeff: [-C + D + 2, -1]\napproximation: -(1 - x)**2 + (1 - x)*(-C + D + 2)\n('numerical solution:', D*x - (1 - x)**2 + (1 - x)*(-C + D + 2))\n('simplified:', C*x - C + D - x**2 + 1)\n('u exact', C*x - C + D - x**2 + 1)\n('u:', {'G1': -(1 - x)**2 - 2, 'exact': -x**2 + 2*x - 3})\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
e731f3cd0b5be9a3ad199dd682fff864649b14fb | 8,650 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/Single Oscillator-checkpoint.ipynb | jorgehatccrma/pygrfnn | c67cb30c5cde579796ccbacc6338eb0631e81f6e | [
"BSD-3-Clause"
] | 7 | 2015-10-01T12:54:11.000Z | 2018-09-27T04:10:49.000Z | notebooks/.ipynb_checkpoints/Single Oscillator-checkpoint.ipynb | jorgehatccrma/pygrfnn | c67cb30c5cde579796ccbacc6338eb0631e81f6e | [
"BSD-3-Clause"
] | null | null | null | notebooks/.ipynb_checkpoints/Single Oscillator-checkpoint.ipynb | jorgehatccrma/pygrfnn | c67cb30c5cde579796ccbacc6338eb0631e81f6e | [
"BSD-3-Clause"
] | 3 | 2015-10-01T12:54:14.000Z | 2018-11-15T13:35:21.000Z | 59.246575 | 1,523 | 0.575838 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7320363596447f33629fe9bf90bc1317c4dacc2 | 120,346 | ipynb | Jupyter Notebook | wk01/Untitled.ipynb | nouyang/janson299r | 2407f11a94d496d5bec044d3e007661d76b9cff3 | [
"MIT"
] | null | null | null | wk01/Untitled.ipynb | nouyang/janson299r | 2407f11a94d496d5bec044d3e007661d76b9cff3 | [
"MIT"
] | null | null | null | wk01/Untitled.ipynb | nouyang/janson299r | 2407f11a94d496d5bec044d3e007661d76b9cff3 | [
"MIT"
] | null | null | null | 776.425806 | 54,406 | 0.942549 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7320bd843a9963c5f656ee28895cef2c4e0e562 | 35,588 | ipynb | Jupyter Notebook | doc/Tutorials/Columnar_Data.ipynb | stuarteberg/holoviews | 65136173014124b41cee00f5a0fee82acdc78f7f | [
"BSD-3-Clause"
] | 1 | 2019-01-02T20:20:09.000Z | 2019-01-02T20:20:09.000Z | doc/Tutorials/Columnar_Data.ipynb | stuarteberg/holoviews | 65136173014124b41cee00f5a0fee82acdc78f7f | [
"BSD-3-Clause"
] | null | null | null | doc/Tutorials/Columnar_Data.ipynb | stuarteberg/holoviews | 65136173014124b41cee00f5a0fee82acdc78f7f | [
"BSD-3-Clause"
] | 1 | 2021-10-31T05:26:08.000Z | 2021-10-31T05:26:08.000Z | 39.410853 | 965 | 0.644234 | [
[
[
"In this Tutorial we will explore how to work with columnar data in HoloViews. Columnar data has a fixed list of column headings, with values stored in an arbitrarily long list of rows. Spreadsheets, relational databases, CSV files, and many other typical data sources fit naturally into this format. HoloViews defines an extensible system of interfaces to load, manipulate, and visualize this kind of data, as well as allowing conversion of any of the non-columnar data types into columnar data for analysis or data interchange.\n\nBy default HoloViews will use one of three storage formats for columnar data:\n\n* A pure Python dictionary containing each column.\n* A purely NumPy-based format for numeric data.\n* Pandas DataFrames",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport holoviews as hv\nfrom IPython.display import HTML\nhv.notebook_extension()",
"_____no_output_____"
]
],
[
[
"# Simple Dataset",
"_____no_output_____"
],
[
"Usually when working with data we have one or more independent variables, taking the form of categories, labels, discrete sample coordinates, or bins. These variables are what we refer to as key dimensions (or ``kdims`` for short) in HoloViews. The observer or dependent variables, on the other hand, are referred to as value dimensions (``vdims``), and are ordinarily measured or calculated given the independent variables. The simplest useful form of a Dataset object is therefore a column 'x' and a column 'y' corresponding to the key dimensions and value dimensions respectively. An obvious visual representation of this data is a Table:",
"_____no_output_____"
]
],
[
[
"xs = np.arange(10)\nys = np.exp(xs)\n\ntable = hv.Table((xs, ys), kdims=['x'], vdims=['y'])\ntable",
"_____no_output_____"
]
],
[
[
"However, this data has many more meaningful visual representations, and therefore the first important concept is that Dataset objects are interchangeable as long as their dimensionality allows it, meaning that you can easily create the different objects from the same data (and cast between the objects once created):",
"_____no_output_____"
]
],
[
[
"hv.Scatter(table) + hv.Curve(table) + hv.Bars(table)",
"_____no_output_____"
]
],
[
[
"Each of these three plots uses the same data, but represents a different assumption about the semantic meaning of that data -- the Scatter plot is appropriate if that data consists of independent samples, the Curve plot is appropriate for samples chosen from an underlying smooth function, and the Bars plot is appropriate for independent categories of data. Since all these plots have the same dimensionality, they can easily be converted to each other, but there is normally only one of these representations that is semantically appropriate for the underlying data. For this particular data, the semantically appropriate choice is Curve, since the *y* values are samples from the continuous function ``exp``.\n\nAs a guide to which Elements can be converted to each other, those of the same dimensionality here should be interchangeable, because of the underlying similarity of their columnar representation:\n\n* 0D: BoxWhisker, Spikes, Distribution*, \n* 1D: Scatter, Curve, ErrorBars, Spread, Bars, BoxWhisker, Regression*\n* 2D: Points, HeatMap, Bars, BoxWhisker, Bivariate*\n* 3D: Scatter3D, TriSurface, VectorField, BoxWhisker, Bars\n\n\\* - requires Seaborn\n\nThis categorization is based only on the ``kdims``, which define the space in which the data has been sampled or defined. An Element can also have any number of value dimensions (``vdims``), which may be mapped onto various attributes of a plot such as the color, size, and orientation of the plotted items. For a reference of how to use these various Element types, see the [Elements Tutorial](Elements.ipynb).",
"_____no_output_____"
],
[
"## Data types and Constructors\n\nAs discussed above, Dataset provide an extensible interface to store and operate on data in different formats. All interfaces support a number of standard constructors.",
"_____no_output_____"
],
[
"#### Storage formats",
"_____no_output_____"
],
[
"Dataset types can be constructed using one of three supported formats, (a) a dictionary of columns, (b) an NxD array with N rows and D columns, or (c) pandas dataframes:",
"_____no_output_____"
]
],
[
[
"print(hv.Scatter({'x': xs, 'y': ys}) +\n hv.Scatter(np.column_stack([xs, ys])) +\n hv.Scatter(pd.DataFrame({'x': xs, 'y': ys})))",
"_____no_output_____"
]
],
[
[
"#### Literals\n\nIn addition to the main storage formats, Dataset Elements support construction from three Python literal formats: (a) An iterator of y-values, (b) a tuple of columns, and (c) an iterator of row tuples.",
"_____no_output_____"
]
],
[
[
"print(hv.Scatter(ys) + hv.Scatter((xs, ys)) + hv.Scatter(zip(xs, ys)))",
"_____no_output_____"
]
],
[
[
"For these inputs, the data will need to be copied to a new data structure, having one of the three storage formats above. By default Dataset will try to construct a simple array, falling back to either pandas dataframes (if available) or the dictionary-based format if the data is not purely numeric. Additionally, the interfaces will try to maintain the provided data's type, so numpy arrays and pandas DataFrames will therefore always be parsed by the array and dataframe interfaces first respectively.",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame({'x': xs, 'y': ys, 'z': ys*2})\nprint(type(hv.Scatter(df).data))",
"_____no_output_____"
]
],
[
[
"Dataset will attempt to parse the supplied data, falling back to each consecutive interface if the previous could not interpret the data. The default list of fallbacks and simultaneously the list of allowed datatypes is:",
"_____no_output_____"
]
],
[
[
"hv.Dataset.datatype",
"_____no_output_____"
]
],
[
[
"Note these include grid based datatypes, which are not covered in this tutorial. To select a particular storage format explicitly, supply one or more allowed datatypes:",
"_____no_output_____"
]
],
[
[
"print(type(hv.Scatter((xs.astype('float64'), ys), datatype=['array']).data))\nprint(type(hv.Scatter((xs, ys), datatype=['dictionary']).data))\nprint(type(hv.Scatter((xs, ys), datatype=['dataframe']).data))",
"_____no_output_____"
]
],
[
[
"#### Sharing Data",
"_____no_output_____"
],
[
"Since the formats with labelled columns do not require any specific order, each Element can effectively become a view into a single set of data. By specifying different key and value dimensions, many Elements can show different values, while sharing the same underlying data source.",
"_____no_output_____"
]
],
[
[
"overlay = hv.Scatter(df, kdims='x', vdims='y') * hv.Scatter(df, kdims='x', vdims='z')\noverlay",
"_____no_output_____"
]
],
[
[
"We can quickly confirm that the data is actually shared:",
"_____no_output_____"
]
],
[
[
"overlay.Scatter.I.data is overlay.Scatter.II.data",
"_____no_output_____"
]
],
[
[
"For columnar data, this approach is much more efficient than creating copies of the data for each Element, and allows for some advanced features like linked brushing in the [Bokeh backend](Bokeh_Backend.ipynb).",
"_____no_output_____"
],
[
"#### Converting to raw data",
"_____no_output_____"
],
[
"Column types make it easy to export the data to the three basic formats: arrays, dataframes, and a dictionary of columns.\n\n###### Array",
"_____no_output_____"
]
],
[
[
"table.array()",
"_____no_output_____"
]
],
[
[
"###### Pandas DataFrame",
"_____no_output_____"
]
],
[
[
"HTML(table.dframe().head().to_html())",
"_____no_output_____"
]
],
[
[
"###### Dataset dictionary",
"_____no_output_____"
]
],
[
[
"table.columns()",
"_____no_output_____"
]
],
[
[
"# Creating tabular data from Elements using the .table and .dframe methods\n\nIf you have data in some other HoloViews element and would like to use the columnar data features, you can easily tabularize any of the core Element types into a ``Table`` Element, using the ``.table()`` method. Similarly, the ``.dframe()`` method will convert an Element into a pandas DataFrame. These methods are very useful if you want to then transform the data into a different Element type, or to perform different types of analysis.\n\n## Tabularizing simple Elements\n\nFor a simple example, we can create a ``Curve`` of an exponential function and convert it to a ``Table`` with the ``.table`` method, with the same result as creating the Table directly from the data as done earlier on this Tutorial:",
"_____no_output_____"
]
],
[
[
"xs = np.arange(10)\ncurve = hv.Curve(zip(xs, np.exp(xs)))\ncurve * hv.Scatter(curve) + curve.table()",
"_____no_output_____"
]
],
[
[
"Similarly, we can get a pandas dataframe of the Curve using ``curve.dframe()``. Here we wrap that call as raw HTML to allow automated testing of this notebook, but just calling ``curve.dframe()`` would give the same result visually:",
"_____no_output_____"
]
],
[
[
"HTML(curve.dframe().to_html())",
"_____no_output_____"
]
],
[
[
"Although 2D image-like objects are *not* inherently well suited to a flat columnar representation, serializing them by converting to tabular data is a good way to reveal the differences between Image and Raster elements. Rasters are a very simple type of element, using array-like integer indexing of rows and columns from their top-left corner as in computer graphics applications. Conversely, Image elements are a higher-level abstraction that provides a general-purpose continuous Cartesian coordinate system, with x and y increasing to the right and upwards as in mathematical applications, and each point interpreted as a sample representing the pixel in which it is located (and thus centered within that pixel). Given the same data, the ``.table()`` representation will show how the data is being interpreted (and accessed) differently in the two cases (as explained in detail in the [Continuous Coordinates Tutorial](Continuous_Coordinates.ipynb)):",
"_____no_output_____"
]
],
[
[
"%%opts Points (s=200) [size_index=None]\nextents = (-1.6,-2.7,2.0,3)\nnp.random.seed(42)\nmat = np.random.rand(3, 3)\n\nimg = hv.Image(mat, bounds=extents)\nraster = hv.Raster(mat)\n\nimg * hv.Points(img) + img.table() + \\\nraster * hv.Points(raster) + raster.table()",
"_____no_output_____"
]
],
[
[
"## Tabularizing space containers\n\nEven deeply nested objects can be deconstructed in this way, serializing them to make it easier to get your raw data out of a collection of specialized Element types. Let's say we want to make multiple observations of a noisy signal. We can collect the data into a HoloMap to visualize it and then call ``.table()`` to get a columnar object where we can perform operations or transform it to other Element types. Deconstructing nested data in this way only works if the data is homogeneous. In practical terms, the requirement is that your data structure contains Elements (of any types) in these Container types: NdLayout, GridSpace, HoloMap, and NdOverlay, with all dimensions consistent throughout (so that they can all fit into the same set of columns).\n\nLet's now go back to the Image example. We will now collect a number of observations of some noisy data into a HoloMap and display it:",
"_____no_output_____"
]
],
[
[
"obs_hmap = hv.HoloMap({i: hv.Image(np.random.randn(10, 10), bounds=(0,0,3,3))\n for i in range(3)}, kdims=['Observation'])\nobs_hmap",
"_____no_output_____"
]
],
[
[
"Now we can serialize this data just as before, where this time we get a four-column (4D) table. The key dimensions of both the HoloMap and the Images, as well as the z-values of each Image, are all merged into a single table. We can visualize the samples we have collected by converting it to a Scatter3D object.",
"_____no_output_____"
]
],
[
[
"%%opts Layout [fig_size=150] Scatter3D [color_index=3 size_index=None] (cmap='hot' edgecolor='k' s=50)\nobs_hmap.table().to.scatter3d() + obs_hmap.table()",
"_____no_output_____"
]
],
[
[
"Here the `z` dimension is shown by color, as in the original images, and the other three dimensions determine where the datapoint is shown in 3D. This way of deconstructing will work for any data structure that satisfies the conditions described above, no matter how nested. If we vary the amount of noise while continuing to performing multiple observations, we can create an ``NdLayout`` of HoloMaps, one for each level of noise, and animated by the observation number.",
"_____no_output_____"
]
],
[
[
"from itertools import product\nextents = (0,0,3,3)\nerror_hmap = hv.HoloMap({(i, j): hv.Image(j*np.random.randn(3, 3), bounds=extents)\n for i, j in product(range(3), np.linspace(0, 1, 3))},\n kdims=['Observation', 'noise'])\nnoise_layout = error_hmap.layout('noise')\nnoise_layout",
"_____no_output_____"
]
],
[
[
"And again, we can easily convert the object to a ``Table``:",
"_____no_output_____"
]
],
[
[
"%%opts Table [fig_size=150]\nnoise_layout.table()",
"_____no_output_____"
]
],
[
[
"# Applying operations to the data",
"_____no_output_____"
],
[
"#### Sorting by columns",
"_____no_output_____"
],
[
"Once data is in columnar form, it is simple to apply a variety of operations. For instance, Dataset can be sorted by their dimensions using the ``.sort()`` method. By default, this method will sort by the key dimensions, but any other dimension(s) can be supplied to specify sorting along any other dimensions:",
"_____no_output_____"
]
],
[
[
"bars = hv.Bars((['C', 'A', 'B', 'D'], [2, 7, 3, 4]))\nbars + bars.sort() + bars.sort(['y'])",
"_____no_output_____"
]
],
[
[
"#### Working with categorical or grouped data",
"_____no_output_____"
],
[
"Data is often grouped in various ways, and the Dataset interface provides various means to easily compare between groups and apply statistical aggregates. We'll start by generating some synthetic data with two groups along the x-axis and 4 groups along the y axis.",
"_____no_output_____"
]
],
[
[
"n = np.arange(1000)\nxs = np.repeat(range(2), 500)\nys = n%4\nzs = np.random.randn(1000)\ntable = hv.Table((xs, ys, zs), kdims=['x', 'y'], vdims=['z'])\ntable",
"_____no_output_____"
]
],
[
[
"Since there are repeat observations of the same x- and y-values, we have to reduce the data before we display it or else use a datatype that supports plotting distributions in this way. The ``BoxWhisker`` type allows doing exactly that:",
"_____no_output_____"
]
],
[
[
"%%opts BoxWhisker [aspect=2 fig_size=200 bgcolor='w']\nhv.BoxWhisker(table)",
"_____no_output_____"
]
],
[
[
"### Aggregating/Reducing dimensions",
"_____no_output_____"
],
[
"Most types require the data to be non-duplicated before being displayed. For this purpose, HoloViews makes it easy to ``aggregate`` and ``reduce`` the data. These two operations are simple complements of each other--aggregate computes a statistic for each group in the supplied dimensions, while reduce combines all the groups except the supplied dimensions. Supplying only a function and no dimensions will simply aggregate or reduce all available key dimensions.",
"_____no_output_____"
]
],
[
[
"%%opts Bars [show_legend=False] {+axiswise}\nhv.Bars(table).aggregate(function=np.mean) + hv.Bars(table).reduce(x=np.mean)",
"_____no_output_____"
]
],
[
[
"(**A**) aggregates over both the x and y dimension, computing the mean for each x/y group, while (**B**) reduces the x dimension leaving just the mean for each group along y.",
"_____no_output_____"
],
[
"##### Collapsing multiple Dataset Elements",
"_____no_output_____"
],
[
"When multiple observations are broken out into a HoloMap they can easily be combined using the ``collapse`` method. Here we create a number of Curves with increasingly larger y-values. By collapsing them with a ``function`` and a ``spreadfn`` we can compute the mean curve with a confidence interval. We then simply cast the collapsed ``Curve`` to a ``Spread`` and ``Curve`` Element to visualize them.",
"_____no_output_____"
]
],
[
[
"hmap = hv.HoloMap({i: hv.Curve(np.arange(10)*i) for i in range(10)})\ncollapsed = hmap.collapse(function=np.mean, spreadfn=np.std)\nhv.Spread(collapsed) * hv.Curve(collapsed) + collapsed.table()",
"_____no_output_____"
]
],
[
[
"## Working with complex data\n\nIn the last section we only scratched the surface of what the Dataset interface can do. When it really comes into its own is when working with high-dimensional datasets. As an illustration, we'll load a dataset of some macro-economic indicators for OECD countries from 1964-1990, cached on the HoloViews website.",
"_____no_output_____"
]
],
[
[
"macro_df = pd.read_csv('http://assets.holoviews.org/macro.csv', '\\t')\nHTML(macro_df.head().to_html())",
"_____no_output_____"
]
],
[
[
"As we can see the data has abbreviated the names of the columns, which is convenient when referring to the variables but is often not what's desired when assigning axis labels, generating widgets, or adding titles.\n\nHoloViews dimensions provide a way to alias the variable names so you can continue to refer to the data by their short convenient ``name`` but can also provide a more descriptive ``label``. These can be declared explicitly when creating a Dimension but the most convenient way of specifying aliases is as a tuple where the first item is the ``name`` and the second the ``label``. \n\nHere will declare a list of key dimensions (i.e. the variables the data is indexed by) and a separate list of value dimensions (i.e. the actual observations), which we will use later when declaring a HoloViews object from our data.",
"_____no_output_____"
]
],
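[
[
"# A possible explicit form of the aliases described above (illustration only; the tutorial itself\n# uses the shorter tuple syntax in the next cell): hv.Dimension objects with a name and a label.\nexplicit_kdims = [hv.Dimension('year', label='Year'), hv.Dimension('country', label='Country')]\nexplicit_kdims",
"_____no_output_____"
]
],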
[
[
"key_dimensions = [('year', 'Year'), ('country', 'Country')]\nvalue_dimensions = [('unem', 'Unemployment'), ('capmob', 'Capital Mobility'),\n ('gdp', 'GDP Growth'), ('trade', 'Trade')]",
"_____no_output_____"
]
],
[
[
"We'll also take this opportunity to set default options for all the following plots.",
"_____no_output_____"
]
],
[
[
"%output dpi=100\noptions = hv.Store.options()\nopts = hv.Options('plot', aspect=2, fig_size=250, show_frame=False, show_grid=True, legend_position='right')\noptions.NdOverlay = opts\noptions.Overlay = opts",
"_____no_output_____"
]
],
[
[
"###### Loading the data\n\nAs we saw above, we can supply a dataframe to any Dataset type. When dealing with so many dimensions it would be cumbersome to supply all the dimensions explicitly, but luckily Dataset can easily infer the dimensions from the dataframe itself. We simply supply the ``kdims``, and it will infer that all other numeric dimensions should be treated as value dimensions (``vdims``).",
"_____no_output_____"
]
],
[
[
"macro = hv.Table(macro_df, kdims=key_dimensions, vdims=value_dimensions)",
"_____no_output_____"
]
],
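Since the value dimensions are inferred, the same Table could also be declared more tersely by supplying only the key dimensions. A sketch using the raw column names rather than the aliased tuples (the remaining numeric columns then become the ``vdims`` automatically):

```python
macro_inferred = hv.Table(macro_df, kdims=['year', 'country'])
macro_inferred.vdims  # unem, capmob, gdp and trade are picked up as value dimensions
```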
[
[
"To get an overview of the data we'll quickly sort it and then view the data for one year.",
"_____no_output_____"
]
],
[
[
"%%opts Table [aspect=1.5 fig_size=300]\nmacro = macro.sort()\nmacro[1988]",
"_____no_output_____"
]
],
[
[
"Most of the examples above focus on converting a Table to simple Element types, but HoloViews also provides powerful container objects to explore high-dimensional data, such as [HoloMap](Containers.ipynb#HoloMap), [NdOverlay](Containers.ipynb#NdOverlay), [NdLayout](Containers.ipynb#NdLayout), and [GridSpace](Containers.ipynb#Layout). HoloMaps work as a useful interchange format from which you can conveniently convert to the other container types using its ``.overlay()``, ``.layout()``, and ``.grid()`` methods. This way we can easily create an overlay of GDP Growth curves by year for each country. Here ``Year`` is a key dimension and ``GDP Growth`` a value dimension. We are then left with the ``Country`` dimension, which we can overlay using the ``.overlay()`` method.",
"_____no_output_____"
]
],
[
[
"%%opts Curve (color=Palette('Set3'))\ngdp_curves = macro.to.curve('Year', 'GDP Growth')\ngdp_curves.overlay('Country')",
"_____no_output_____"
]
],
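``.overlay()`` is only one of the conversions mentioned above; the same HoloMap can instead be faceted with ``.layout()`` or ``.grid()``. A sketch, subselecting a few countries first so the result stays readable:

```python
subset = gdp_curves.select(Country={'United States', 'United Kingdom', 'Canada'})
subset.layout('Country')   # side-by-side panels, one per country
# subset.grid('Country')   # or a GridSpace laid out along the Country dimension
```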
[
[
"Now that we've extracted the ``gdp_curves``, we can apply some operations to them. As in the simpler example above we will ``collapse`` the HoloMap of Curves using a number of functions to visualize the distribution of GDP Growth rates over time. First we find the mean curve with ``np.std`` as the ``spreadfn`` and cast the result to a ``Spread`` type, then we compute the min, mean and max curve in the same way and put them all inside an Overlay.",
"_____no_output_____"
]
],
[
[
"%%opts Overlay [bgcolor='w' legend_position='top_right'] Curve (color='k' linewidth=1) Spread (facecolor='gray' alpha=0.2)\nhv.Spread(gdp_curves.collapse('Country', np.mean, np.std), label='std') *\\\nhv.Overlay([gdp_curves.collapse('Country', fn).relabel(name).opts(style=dict(linestyle=ls))\n for name, fn, ls in [('max', np.max, '--'), ('mean', np.mean, '-'), ('min', np.min, '--')]])",
"_____no_output_____"
]
],
[
[
"Many HoloViews Element types support multiple ``kdims``, including ``HeatMap``, ``Points``, ``Scatter``, ``Scatter3D``, and ``Bars``. ``Bars`` in particular allows you to lay out your data in groups, categories and stacks. By supplying the index of that dimension as a plotting option you can choose to lay out your data as groups of bars, categories in each group, and stacks. Here we choose to lay out the trade surplus of each country with groups for each year, no categories, and stacked by country. Finally, we choose to color the ``Bars`` for each item in the stack.",
"_____no_output_____"
]
],
[
[
"%opts Bars [bgcolor='w' aspect=3 figure_size=450 show_frame=False]",
"_____no_output_____"
],
[
"%%opts Bars [category_index=2 stack_index=0 group_index=1 legend_position='top' legend_cols=7 color_by=['stack']] (color=Palette('Dark2'))\nmacro.to.bars(['Country', 'Year'], 'Trade', [])",
"_____no_output_____"
]
],
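The three indices are free choices, so the same data can be laid out differently just by permuting them. For example, a sketch that groups the bars by country and stacks them by year — simply the options used above with the roles of ``group_index`` and ``stack_index`` swapped:

```python
%%opts Bars [category_index=2 stack_index=1 group_index=0 legend_position='top' legend_cols=7 color_by=['stack']] (color=Palette('Dark2'))
macro.to.bars(['Country', 'Year'], 'Trade', [])
```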
[
[
"This plot contains a lot of data, and so it's probably a good idea to focus on specific aspects of it, telling a simpler story about them. For instance, using the .select method we can then customize the palettes (e.g. to use consistent colors per country across multiple analyses).\n\nPalettes can customized by selecting only a subrange of the underlying cmap to draw the colors from. The Palette draws samples from the colormap using the supplied ``sample_fn``, which by default just draws linear samples but may be overriden with any function that draws samples in the supplied ranges. By slicing the ``Set1`` colormap we draw colors only from the upper half of the palette and then reverse it.",
"_____no_output_____"
]
],
[
[
"%%opts Bars [padding=0.02 color_by=['group']] (alpha=0.6, color=Palette('Set1', reverse=True)[0.:.2])\ncountries = {'Belgium', 'Netherlands', 'Sweden', 'Norway'}\nmacro.to.bars(['Country', 'Year'], 'Unemployment').select(Year=(1978, 1985), Country=countries)",
"_____no_output_____"
]
],
[
[
"Many HoloViews Elements support multiple key and value dimensions. A HeatMap is indexed by two kdims, so we can visualize each of the economic indicators by year and country in a Layout. Layouts are useful for heterogeneous data you want to lay out next to each other.\n\nBefore we display the Layout let's apply some styling; we'll suppress the value labels applied to a HeatMap by default and substitute it for a colorbar. Additionally we up the number of xticks that are drawn and rotate them by 90 degrees to avoid overlapping. Flipping the y-axis ensures that the countries appear in alphabetical order. Finally we reduce some of the margins of the Layout and increase the size.",
"_____no_output_____"
]
],
[
[
"%opts HeatMap [show_values=False xticks=40 xrotation=90 aspect=1.2 invert_yaxis=True colorbar=True]",
"_____no_output_____"
],
[
"%%opts Layout [aspect_weight=1 fig_size=150 sublabel_position=(-0.2, 1.)]\nhv.Layout([macro.to.heatmap(['Year', 'Country'], value)\n for value in macro.data.columns[2:]]).cols(2)",
"_____no_output_____"
]
],
[
[
"Another way of combining heterogeneous data dimensions is to map them to a multi-dimensional plot type. Scatter Elements, for example, support multiple ``vdims``, which may be mapped onto the color and size of the drawn points in addition to the y-axis position. \n\nAs for the Curves above we supply 'Year' as the sole key dimension and rely on the Table to automatically convert the Country to a map dimension, which we'll overlay. However this time we select both GDP Growth and Unemployment, to be plotted as points. To get a sensible chart, we adjust the scaling_factor for the points to get a reasonable distribution in sizes and apply a categorical Palette so we can distinguish each country.",
"_____no_output_____"
]
],
[
[
"%%opts Scatter [scaling_method='width' scaling_factor=2 size_index=2] (color=Palette('Set3') edgecolors='k')\ngdp_unem_scatter = macro.to.scatter('Year', ['GDP Growth', 'Unemployment'])\ngdp_unem_scatter.overlay('Country')",
"_____no_output_____"
]
],
[
[
"In this way we can plot any dimension against any other dimension, very easily allowing us to iterate through different ways of revealing relationships in the dataset.",
"_____no_output_____"
]
],
[
[
"%%opts NdOverlay [legend_cols=2] Scatter [size_index=1] (color=Palette('Blues'))\nmacro.to.scatter('GDP Growth', 'Unemployment', ['Year']).overlay()",
"_____no_output_____"
]
],
[
[
"This view, for example, immediately highlights the high unemployment rates of the 1980s.",
"_____no_output_____"
],
[
"Since all HoloViews Elements are composable, we can generate complex figures just by applying the * operator. We'll simply reuse the GDP curves we generated earlier, combine them with the scatter points (which indicate the unemployment rate by size) and annotate the data with some descriptions of what happened economically in these years.",
"_____no_output_____"
]
],
[
[
"%%opts Curve (color='k') Scatter [color_index=2 size_index=2 scaling_factor=1.4] (cmap='Blues' edgecolors='k')\n\nmacro_overlay = gdp_curves * gdp_unem_scatter\nannotations = hv.Arrow(1973, 8, 'Oil Crisis', 'v') * hv.Arrow(1975, 6, 'Stagflation', 'v') *\\\nhv.Arrow(1979, 8, 'Energy Crisis', 'v') * hv.Arrow(1981.9, 5, 'Early Eighties\\n Recession', 'v')\nmacro_overlay * annotations",
"_____no_output_____"
]
],
[
[
"Since we didn't map the country to some other container type, we get a widget allowing us to view the plot separately for each country, reducing the forest of curves we encountered before to manageable chunks. \n\nWhile looking at the plots individually like this allows us to study trends for each country, we may want to lay out a subset of the countries side by side, e.g. for non-interactive publications. We can easily achieve this by selecting the countries we want to view and and then applying the ``.layout`` method. We'll also want to restore the square aspect ratio so the plots compose nicely.",
"_____no_output_____"
]
],
[
[
"%%opts NdLayout [figure_size=100] Overlay [aspect=1] Scatter [color_index=2] (cmap='Reds')\ncountries = {'United States', 'Canada', 'United Kingdom'}\n(gdp_curves * gdp_unem_scatter).select(Country=countries).layout('Country')",
"_____no_output_____"
]
],
[
[
"Finally, let's combine some plots for each country into a Layout, giving us a quick overview of each economic indicator for each country:",
"_____no_output_____"
]
],
[
[
"%%opts Scatter [color_index=2] (cmap='Reds') Overlay [aspect=1]\n(macro_overlay.relabel('GDP Growth', depth=1) +\\\nmacro.to.curve('Year', 'Unemployment', ['Country'], group='Unemployment',) +\\\nmacro.to.curve('Year', 'Trade', ['Country'], group='Trade') +\\\nmacro.to.scatter('GDP Growth', 'Unemployment', ['Country'])).cols(2)",
"_____no_output_____"
]
],
[
[
"As you can see, columnar data makes a huge range of analyses and visualizations quite straightforward! You can use these tools with many of the [Elements](Elements.ipynb) and [Containers](Containers.ipynb) available in HoloViews, to easily express what you want to visualize.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e732124fac7ac5dab33d11842a4fcfe56ffa1f25 | 9,166 | ipynb | Jupyter Notebook | qiskit-textbook/content/ch-gates/proving-universality.ipynb | RenatoFarruggio/Quantum-information-course-Basel | 6f85bbaf78dc6720463be6ecea84d427e5788f8f | [
"Apache-2.0"
] | 1 | 2020-08-05T15:42:49.000Z | 2020-08-05T15:42:49.000Z | qiskit-textbook/content/ch-gates/proving-universality.ipynb | RenatoFarruggio/Quantum-information-course-Basel | 6f85bbaf78dc6720463be6ecea84d427e5788f8f | [
"Apache-2.0"
] | null | null | null | qiskit-textbook/content/ch-gates/proving-universality.ipynb | RenatoFarruggio/Quantum-information-course-Basel | 6f85bbaf78dc6720463be6ecea84d427e5788f8f | [
"Apache-2.0"
] | null | null | null | 52.678161 | 541 | 0.658412 | [
[
[
"# Proving Universality",
"_____no_output_____"
],
[
"What does it mean for a computer to do everything that it could possibly do? This was a question tackled by Alan Turing before we even had a good idea of what a computer was.\n\nTo ask this question for our classical computers, and specifically for our standard digital computers, we need to strip away all the screens, speakers and fancy input devices. What we are left with is simply a machine that converts input bit strings into output bit strings. If a device can perform any such conversion, taking any arbitrary set of inputs and coverting them to an arbitrarily chosen set of outputs, we call it *universal*.\n\nIt turns out that the requirements for universality on these devices are quite reasonable. The gates we needed to perform addition in 'The atoms of computation' are also sufficient to implement any possible computation. In fact, just the classical NAND gate is enough, when combined together in sufficient quantities.\n\nThough our current computers can do everything in theory, some tasks are too resource-intensive in practice. In our study of how to add, we saw that the required resources scaled linearly with the problem size. For example, if we double the number of digits in the numbers, we double the number of small scale additions we need to make.\n\nFor many other problems, the required resources scale exponentially with the input size. Factorization is a prominent example. In a recent study [1], a 320-digit number took CPU years to factorize. For numbers that are not much larger, there aren't enough computing resources in the world to tackle them -- even though those same numbers could be added or multiplied on just a smartphone in a much more reasonable time.\n\nQuantum computers will alleviate these problems by achieving universality in a fundamentally different way. As we saw in 'The unique properties of qubits', the variables of quantum computing are not equivalent to those of standard computers. The gates that we use, such as those in the last section, go beyond what is possible for the gates of standard computers. Because of this, we can find ways to achieve results that are otherwise impossible.\n\nSo how to define what universality is for a quantum computer? We can do this in a way that mirrors the definition discussed above. Just as digital computers convert sets of input bit strings to sets of output bit strings, unitary operations convert sets of orthogonal input states into orthogonal output states.\n\nAs a special case, these states could describe bit strings expressed in quantum form. If we can achieve any unitary, we can therefore achieve universality in the same way as for digital computers.\n\nAnother special case is that the input and output states could describe real physical systems. The unitary would then correspond to a time evolution. When expressed in an exponential form using a suitable Hermitian matrix, that matrix would correspond to the Hamiltonian. Achieving any unitary would therefore correspond to simulating any time evolution, and engineering the effects of any Hamiltonian. This is also an important problem that is impractical for classical computers, but is a natural application of quantum computers.\n\nUniversality for quantum computers is then simply this: the ability to achieve any desired unitary on any arbitrary number of qubits.\n\nAs for classical computers, we will need to split this big job up into manageable chunks. We'll need to find a basic set of gates that will allow us to achieve this. 
As we'll see, the single- and two-qubit gates of the last section are sufficient for the task.\n\nSuppose we wish to implement the unitary\n\n$$\nU = e^{i(aX + bZ)},\n$$\n\nbut the only gates we have are $R_x(\\theta) = e^{i \\frac{\\theta}{2} X}$ and $R_z(\\theta) = e^{i \\frac{\\theta}{2} Z}$. The best way to solve this problem would be to use Euler angles. But let's instead consider a different method.\n\nThe Hermitian matrix in the exponential for $U$ is simply the sum of those for the $R_x(\\theta)$ and $R_z(\\theta)$ rotations. This suggests a naive approach to solving our problem: we could apply $R_z(2b) = e^{i bZ}$ followed by $R_x(2a) = e^{i a X}$. Unfortunately, because we are exponentiating matrices that do not commute, this approach will not work.\n\n$$\ne^{i a X} e^{i b Z} \\neq e^{i(aX + bZ)}\n$$\n\nHowever, we could use the following modified version:\n\n$$\nU = \\lim_{n\\rightarrow\\infty} ~ \\left(e^{iaX/n}e^{ibZ/n}\\right)^n.\n$$\n\nHere we split $U$ up into $n$ small slices. For each slice, it is a good approximation to say that\n\n$$\ne^{iaX/n}e^{ibZ/n} = e^{i(aX + bZ)/n}\n$$\n\nThe error in this approximation scales as $1/n^2$. When we combine the $n$ slices, we get an approximation of our target unitary whose error scales as $1/n$. So by simply increasing the number of slices, we can get as close to $U$ as we need. Other methods of creating the sequence are also possible to get even more accurate versions of our target unitary.\n\nThe power of this method is that it can be used in more complex cases than just a single qubit. For example, consider the unitary \n\n$$\nU = e^{i(aX\\otimes X\\otimes X + bZ\\otimes Z\\otimes Z)}.\n$$\n\nWe know how to create the unitary $e^{i\\frac{\\theta}{2} X\\otimes X\\otimes X}$ from a single qubit $R_x(\\theta)$ and two controlled-NOTs.\n\n```python\nqc.cx(0,2)\nqc.cx(0,1)\nqc.rx(theta,0)\nqc.cx(0,1)\nqc.cx(0,2)\n```\n\nWith a few Hadamards, we can do the same for $e^{i\\frac{\\theta}{2} Z\\otimes Z\\otimes Z}$.\n\n```python\nqc.h(0)\nqc.h(1)\nqc.h(2)\nqc.cx(0,2)\nqc.cx(0,1)\nqc.rx(theta,0)\nqc.cx(0,1)\nqc.cx(0,2)\nqc.h(2)\nqc.h(1)\nqc.h(0)\n```\n\nThis gives us the ability to reproduce a small slice of our new, three-qubit $U$:\n\n$$\ne^{iaX\\otimes X\\otimes X/n}e^{ibZ\\otimes Z\\otimes Z/n} = e^{i(aX\\otimes X\\otimes X + bZ\\otimes Z\\otimes Z)/n}.\n$$\n\nAs before, we can then combine the slices together to get an arbitrarily accurate approximation of $U$.\n\nThis method continues to work as we increase the number of qubits, and also the number of terms that need simulating. Care must be taken to ensure that the approximation remains accurate, but this can be done in ways that require reasonable resources. Adding extra terms to simulate, or increasing the desired accuracy, only requires the complexity of the method to increase polynomially.\n\nMethods of this form can reproduce any unitary $U = e^{iH}$ for which $H$ can be expressed as a sum of tensor products of Paulis. Since we have shown previously that all matrices can be expressed in this way, this is sufficient to show that we can reproduce all unitaries. Though other methods may be better in practice, the main concept to take away from this chapter is that there is certainly a way to reproduce all multi-qubit unitaries using only the basic operations found in Qiskit. Quantum universality can be achieved.",
"_____no_output_____"
],
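The $1/n$ error scaling claimed above is easy to check numerically. The following is an illustrative sketch in plain NumPy/SciPy, independent of Qiskit; the values of $a$ and $b$ are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
a, b = 1.0, 0.5

# Target unitary U = exp(i(aX + bZ))
U_target = expm(1j * (a * X + b * Z))

for n in [1, 10, 100, 1000]:
    # One slice: exp(iaX/n) exp(ibZ/n), combined n times
    slice_unitary = expm(1j * a * X / n) @ expm(1j * b * Z / n)
    U_approx = np.linalg.matrix_power(slice_unitary, n)
    # The deviation shrinks roughly in proportion to 1/n
    print(n, np.linalg.norm(U_approx - U_target))
```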
[
"### References\n\n[1] [\"Factorization of a 1061-bit number by the Special Number Field Sieve\"](https://eprint.iacr.org/2012/444.pdf) by Greg Childers.",
"_____no_output_____"
]
],
[
[
"import qiskit\nqiskit.__qiskit_version__",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
e7321262765839437795758932147f12bd038c87 | 491,915 | ipynb | Jupyter Notebook | notebooks/pipeline.ipynb | mbeltagy/ExamplePlots.jl | f1ca662a4f73f31d2d99ff3b0be798a4945f40f0 | [
"MIT"
] | 66 | 2017-01-28T13:05:49.000Z | 2021-11-30T05:07:39.000Z | notebooks/pipeline.ipynb | mbeltagy/ExamplePlots.jl | f1ca662a4f73f31d2d99ff3b0be798a4945f40f0 | [
"MIT"
] | 6 | 2018-09-23T12:59:45.000Z | 2020-07-03T09:48:37.000Z | notebooks/pipeline.ipynb | mbeltagy/ExamplePlots.jl | f1ca662a4f73f31d2d99ff3b0be798a4945f40f0 | [
"MIT"
] | 18 | 2017-03-06T10:41:15.000Z | 2021-08-30T15:29:09.000Z | 58.214793 | 42,406 | 0.538656 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e73215fd03c4460aea54e55a4d2d776f201060cd | 32,684 | ipynb | Jupyter Notebook | site/ru/tutorials/keras/classification.ipynb | ilyaspiridonov/docs-l10n | a061a44e40d25028d0a4458094e48ab717d3565c | [
"Apache-2.0"
] | 1 | 2021-09-23T09:56:29.000Z | 2021-09-23T09:56:29.000Z | site/ru/tutorials/keras/classification.ipynb | ilyaspiridonov/docs-l10n | a061a44e40d25028d0a4458094e48ab717d3565c | [
"Apache-2.0"
] | null | null | null | site/ru/tutorials/keras/classification.ipynb | ilyaspiridonov/docs-l10n | a061a44e40d25028d0a4458094e48ab717d3565c | [
"Apache-2.0"
] | 1 | 2020-06-23T13:30:15.000Z | 2020-06-23T13:30:15.000Z | 32.553785 | 743 | 0.532156 | [
[
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
],
[
"#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.",
"_____no_output_____"
]
],
[
[
"# Обучи свою первую нейросеть: простая классификация",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/keras/classification\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />Смотрите на TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ru/tutorials/keras/classification.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Запустите в Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ru/tutorials/keras/classification.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />Изучайте код на GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ru/tutorials/keras/classification.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Скачайте ноутбук</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"Note: Вся информация в этом разделе переведена с помощью русскоговорящего Tensorflow сообщества на общественных началах. Поскольку этот перевод не является официальным, мы не гарантируем что он на 100% аккуратен и соответствует [официальной документации на английском языке](https://www.tensorflow.org/?hl=en). Если у вас есть предложение как исправить этот перевод, мы будем очень рады увидеть pull request в [tensorflow/docs](https://github.com/tensorflow/docs) репозиторий GitHub. Если вы хотите помочь сделать документацию по Tensorflow лучше (сделать сам перевод или проверить перевод подготовленный кем-то другим), напишите нам на [[email protected] list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ru).",
"_____no_output_____"
],
[
"Это руководство поможет тебе обучить нейросеть, которая классифицирует изображения одежды, например, кроссовки и рубашки. Это нормально, если не все будет понятно сразу: это быстрый, ознакомительный обзор полной программы TensorFlow, где новые детали объясняются по мере их появления.\n\nРуководство использует [tf.keras](https://www.tensorflow.org/guide/keras), высокоуровневый API для построения и обучения моделей в TensorFlow.",
"_____no_output_____"
]
],
[
[
"# TensorFlow и tf.keras\nimport tensorflow as tf\nfrom tensorflow import keras\n\n# Вспомогательные библиотеки\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nprint(tf.__version__)",
"_____no_output_____"
]
],
[
[
"## Загружаем датасет Fashion MNIST",
"_____no_output_____"
],
[
"Это руководство использует датасет [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) который содержит 70,000 монохромных изображений в 10 категориях. На каждом изображении содержится по одному предмету одежды в низком разрешении (28 на 28 пикселей):\n\n<table>\n <tr><td>\n <img src=\"https://tensorflow.org/images/fashion-mnist-sprite.png\"\n alt=\"Fashion MNIST sprite\" width=\"600\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 1.</b> <a href=\"https://github.com/zalandoresearch/fashion-mnist\"> Образцы Fashion-MNIST</a> (Zalando, лицензия MIT).<br/> \n </td></tr>\n</table>\n\nFashion MNIST предназначен для замены классического датасета [MNIST](http://yann.lecun.com/exdb/mnist/) который часто используют как \"Hello, World\" программ машинного обучения для компьютерного зрения. Датасет MNIST содержит изображения рукописных цифр (0, 1, 2, и т.д.) в формате идентичном формату изображений одежды которыми мы будем пользоваться здесь.\n\nЭто руководство для разнообразия использует Fashion MNIST, и еще потому, что это проблема немного сложнее чем обычный MNIST. Оба датасета относительно малы, и используются для проверки корректности работы алгоритма. Это хорошие отправные точки для тестирования и отладки кода.\n\nМы используем 60,000 изображений для обучения нейросети и 10,000 изображений чтобы проверить, насколько правильно сеть обучилась их классифицировать. Вы можете получить доступ к Fashion MNIST прямо из TensorFlow. Импортируйте и загрузите данные Fashion MNIST прямо из TensorFlow:",
"_____no_output_____"
]
],
[
[
"fashion_mnist = keras.datasets.fashion_mnist\n\n(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()",
"_____no_output_____"
]
],
[
[
"Загрузка датасета возвращает четыре массива NumPy:\n\n* Массивы `train_images` и `train_labels` являются *тренировочным сетом* — данными, на которых модель будет обучаться.\n* Модель тестируется на *проверочном сете*, а именно массивах `test_images` и `test_labels`.\n\nИзображения являются 28х28 массивами NumPy, где значение пикселей варьируется от 0 до 255. *Метки (labels)* - это массив целых чисел от 0 до 9. Они соответствуют *классам* одежды изображенной на картинках:\n\n<table>\n <tr>\n <th>Label</th>\n <th>Class</th>\n </tr>\n <tr>\n <td>0</td>\n <td>T-shirt/top</td>\n </tr>\n <tr>\n <td>1</td>\n <td>Trouser</td>\n </tr>\n <tr>\n <td>2</td>\n <td>Pullover</td>\n </tr>\n <tr>\n <td>3</td>\n <td>Dress</td>\n </tr>\n <tr>\n <td>4</td>\n <td>Coat</td>\n </tr>\n <tr>\n <td>5</td>\n <td>Sandal</td>\n </tr>\n <tr>\n <td>6</td>\n <td>Shirt</td>\n </tr>\n <tr>\n <td>7</td>\n <td>Sneaker</td>\n </tr>\n <tr>\n <td>8</td>\n <td>Bag</td>\n </tr>\n <tr>\n <td>9</td>\n <td>Ankle boot</td>\n </tr>\n</table>\n\nКаждому изображению соответствует единственная метка. Так как *названия классов* не включены в датасет, сохраним их тут для дальнейшего использования при построении изображений:",
"_____no_output_____"
]
],
[
[
"class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',\n 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']",
"_____no_output_____"
]
],
[
[
"## Изучите данные\n\nДавайте посмотрим на формат данных перед обучением модели. Воспользовавшись shape мы видим, что в тренировочном датасете 60,000 изображений, каждое размером 28 x 28 пикселей:",
"_____no_output_____"
]
],
[
[
"train_images.shape",
"_____no_output_____"
]
],
[
[
"Соответственно, в тренировочном сете 60,000 меток:",
"_____no_output_____"
]
],
[
[
"len(train_labels)",
"_____no_output_____"
]
],
[
[
"Каждая метка это целое число от 0 до 9:",
"_____no_output_____"
]
],
[
[
"train_labels",
"_____no_output_____"
]
],
[
[
"Проверочный сет содержит 10,000 изображений, каждое - также 28 на 28 пикселей:",
"_____no_output_____"
]
],
[
[
"test_images.shape",
"_____no_output_____"
]
],
[
[
"И в проверочном сете - ровно 10,000 меток:",
"_____no_output_____"
]
],
[
[
"len(test_labels)",
"_____no_output_____"
]
],
[
[
"## Предобработайте данные\n\nДанные должны быть предобработаны перед обучением нейросети. Если вы посмотрите на первое изображение в тренировочном сете вы увидите, что значения пикселей находятся в диапазоне от 0 до 255:",
"_____no_output_____"
]
],
[
[
"plt.figure()\nplt.imshow(train_images[0])\nplt.colorbar()\nplt.grid(False)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Мы масштабируем эти значения к диапазону от 0 до 1 перед тем как скормить их нейросети. Для этого мы поделим значения на 255. Важно, чтобы *тренировочный сет* и *проверочный сет* были предобработаны одинаково:",
"_____no_output_____"
]
],
[
[
"train_images = train_images / 255.0\n\ntest_images = test_images / 255.0",
"_____no_output_____"
]
],
[
[
"Чтобы убедиться, что данные в правильном формате и мы готовы построить и обучить нейросеть, выведем на экран первые 25 изображений из *тренировочного сета* и отобразим под ними наименования их классов.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10,10))\nfor i in range(25):\n plt.subplot(5,5,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(train_images[i], cmap=plt.cm.binary)\n plt.xlabel(class_names[train_labels[i]])\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Постройте модель\n\nПостроение модели нейронной сети требует правильной конфигурации каждого слоя, и последующей компиляции модели.",
"_____no_output_____"
],
[
"### Настройте слои\n\nБазовым строительным блоком нейронной сети является *слой*. Слои извлекают образы из данных, которые в них подаются. Надеемся, что эти образы имеют смысл для решаемой задачи.\n\nБольшая часть глубокого обучения состоит из соединения в последовательность простых слоев. Большинство слоев, таких как tf.keras.layers.Dense, имеют параметры, которые настраиваются во время обучения.",
"_____no_output_____"
]
],
[
[
"model = keras.Sequential([\n keras.layers.Flatten(input_shape=(28, 28)),\n keras.layers.Dense(128, activation='relu'),\n keras.layers.Dense(10, activation='softmax')\n])",
"_____no_output_____"
]
],
[
[
"Первый слой этой сети - `tf.keras.layers.Flatten`, преобразует формат изображения из двумерного массива (28 на 28 пикселей) в одномерный (размерностью 28 * 28 = 784 пикселя). Слой извлекает строки пикселей из изображения и выстраивает их в один ряд. Этот слой не имеет параметров для обучения; он только переформатирует данные.\n\nПосле разложения пикселей, нейросеть содержит два слоя `tf.keras.layers.Dense`. Это полносвязные нейронные слои. Первый `Dense` слой состоит из 128 узлов (или нейронов). Второй (и последний) 10-узловой *softmax* слой возвращает массив из 10 вероятностных оценок дающих в сумме 1. Каждый узел содержит оценку указывающую вероятность принадлежности изображения к одному из 10 классов.\n\n### Скомпилируйте модель\n\nПрежде чем модель будет готова для обучения, нам нужно указать еще несколько параметров. Они добавляются на шаге *compile* модели:\n\n* *Функция потерь (Loss function)* — измеряет точность модели во время обучения. Мы хотим минимизировать эту функцию чтоб \"направить\" модель в верном направлении.\n* *Оптимизатор (Optimizer)* — показывает каким образом обновляется модель на основе входных данных и функции потерь.\n* *Метрики (Metrics)* — используются для мониторинга тренировки и тестирования модели. Наш пример использует метрику *accuracy* равную доле правильно классифицированных изображений.",
"_____no_output_____"
]
],
[
[
"model.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"## Обучите модель\n\nОбучение модели нейронной сети требует выполнения следующих шагов::\n\n1. Подайте тренировочный данные в модель. В этом примере тренировочные данные это массивы `train_images` и `train_labels`.\n2. Модель учится ассоциировать изображения с правильными классами.\n3. Мы просим модель сделать прогнозы для проверочных данных, в этом примере массив test_images. Мы проверяем, соответствуют ли предсказанные классы меткам из массива test_labels.\n\nДля начала обучения, вызовите метод `model.fit`, который называется так, поскольку \"тренирует (fits)\" модель на тренировочных данных:",
"_____no_output_____"
]
],
[
[
"model.fit(train_images, train_labels, epochs=10)",
"_____no_output_____"
]
],
[
[
"В процессе обучения модели отображаются метрики потери (loss) и точности (accuracy). Эта модель достигает на тренировочных данных точности равной приблизительно 0.88 (88%).",
"_____no_output_____"
],
[
"## Оцените точность\n\nДалее, сравните какую точность модель покажет на проверчном датасете:",
"_____no_output_____"
]
],
[
[
"test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)\n\nprint('\\nТочность на проверочных данных:', test_acc)",
"_____no_output_____"
]
],
[
[
"Полученная на проверочном сете точность оказалась немного ниже, чем на тренировочном. Этот разрыв между точностью на тренировке и тесте является примером *переобучения (overfitting)* . Переобучение возникает, когда модель машинного обучения показывает на новых данных худший результат, чем на тех, на которых она обучалась.",
"_____no_output_____"
],
[
"## Сделайте предсказания\n\nТеперь, когда модель обучена, мы можем использовать ее чтобы сделать предсказания по поводу нескольких изображений:",
"_____no_output_____"
]
],
[
[
"predictions = model.predict(test_images)",
"_____no_output_____"
]
],
[
[
"Здесь полученная модель предсказала класс одежды для каждого изображения в проверочном датасете. Давайте посмотрим на первое предсказание:",
"_____no_output_____"
]
],
[
[
"predictions[0]",
"_____no_output_____"
]
],
[
[
"Прогноз представляет из себя массив из 10 чисел. Они описывают \"уверенность\" (confidence) модели в том, насколько изображение соответствует каждому из 10 разных видов одежды. Мы можем посмотреть какой метке соответствует максимальное значение:",
"_____no_output_____"
]
],
[
[
"np.argmax(predictions[0])",
"_____no_output_____"
]
],
[
[
"Модель полагает, что на первой картинке изображен ботинок (ankle boot), или class_names[9]. Проверка показывает, что классификация верна:",
"_____no_output_____"
]
],
[
[
"test_labels[0]",
"_____no_output_____"
]
],
[
[
"Мы можем построить график, чтобы взглянуть на полный набор из 10 предсказаний классов.",
"_____no_output_____"
]
],
[
[
"def plot_image(i, predictions_array, true_label, img):\n predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n\n plt.imshow(img, cmap=plt.cm.binary)\n\n predicted_label = np.argmax(predictions_array)\n if predicted_label == true_label:\n color = 'blue'\n else:\n color = 'red'\n\n plt.xlabel(\"{} {:2.0f}% ({})\".format(class_names[predicted_label],\n 100*np.max(predictions_array),\n class_names[true_label]),\n color=color)\n\ndef plot_value_array(i, predictions_array, true_label):\n predictions_array, true_label = predictions_array[i], true_label[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n thisplot = plt.bar(range(10), predictions_array, color=\"#777777\")\n plt.ylim([0, 1])\n predicted_label = np.argmax(predictions_array)\n\n thisplot[predicted_label].set_color('red')\n thisplot[true_label].set_color('blue')",
"_____no_output_____"
]
],
[
[
"Давайте посмотрим на нулевое изображение, предсказание и массив предсказаний.",
"_____no_output_____"
]
],
[
[
"i = 0\nplt.figure(figsize=(6,3))\nplt.subplot(1,2,1)\nplot_image(i, predictions, test_labels, test_images)\nplt.subplot(1,2,2)\nplot_value_array(i, predictions, test_labels)\nplt.show()",
"_____no_output_____"
],
[
"i = 12\nplt.figure(figsize=(6,3))\nplt.subplot(1,2,1)\nplot_image(i, predictions, test_labels, test_images)\nplt.subplot(1,2,2)\nplot_value_array(i, predictions, test_labels)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Давайте посмотрим несколько изображений с их прогнозами. Цвет верных предсказаний синий, а неверных - красный. Число это процент уверенности (от 100) для предсказанной метки. Отметим, что модель может ошибаться даже если она очень уверена.",
"_____no_output_____"
]
],
[
[
"# Отображаем первые X тестовых изображений, их предсказанную и настоящую метки.\n# Корректные предсказания окрашиваем в синий цвет, ошибочные в красный.\nnum_rows = 5\nnum_cols = 3\nnum_images = num_rows*num_cols\nplt.figure(figsize=(2*2*num_cols, 2*num_rows))\nfor i in range(num_images):\n plt.subplot(num_rows, 2*num_cols, 2*i+1)\n plot_image(i, predictions, test_labels, test_images)\n plt.subplot(num_rows, 2*num_cols, 2*i+2)\n plot_value_array(i, predictions, test_labels)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Наконец, используем обученную модель для предсказания класса на одном изображении.",
"_____no_output_____"
]
],
[
[
"# Берем одну картинку из проверочного сета.\nimg = test_images[0]\n\nprint(img.shape)",
"_____no_output_____"
]
],
[
[
"Модели tf.keras оптимизированы для предсказаний на *пакетах (batch)* данных, или на множестве примеров сразу. Таким образом, даже если мы используем всего 1 картинку, нам все равно необходимо добавить ее в список:",
"_____no_output_____"
]
],
[
[
"# Добавляем изображение в пакет данных, состоящий только из одного элемента.\nimg = (np.expand_dims(img,0))\n\nprint(img.shape)",
"_____no_output_____"
]
],
[
[
"Сейчас предскажем правильную метку для изображения:",
"_____no_output_____"
]
],
[
[
"predictions_single = model.predict(img)\n\nprint(predictions_single)",
"_____no_output_____"
],
[
"plot_value_array(0, predictions_single, test_labels)\n_ = plt.xticks(range(10), class_names, rotation=45)",
"_____no_output_____"
]
],
[
[
"Метод `model.predict` возвращает нам список списков, по одному для каждой картинки в пакете данных. Получите прогнозы для нашего (единственного) изображения в пакете:",
"_____no_output_____"
]
],
[
[
"np.argmax(predictions_single[0])",
"_____no_output_____"
]
],
[
[
"И, как и ранее, модель предсказывает класс 9.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e73251cb0de7a27b744580478ddd058ed5480341 | 182,114 | ipynb | Jupyter Notebook | dashboard.ipynb | mrajkarnikar/06-PyViz | 9c66ea0bfa79c55fc1cfc6e44f6b663b145a79a2 | [
"RSA-MD"
] | null | null | null | dashboard.ipynb | mrajkarnikar/06-PyViz | 9c66ea0bfa79c55fc1cfc6e44f6b663b145a79a2 | [
"RSA-MD"
] | null | null | null | dashboard.ipynb | mrajkarnikar/06-PyViz | 9c66ea0bfa79c55fc1cfc6e44f6b663b145a79a2 | [
"RSA-MD"
] | null | null | null | 27.795177 | 7,615 | 0.380487 | [
[
[
"# San Francisco Rental Prices Dashboard\n\nIn this notebook, you will compile the visualizations from the previous analysis into functions that can be used for a Panel dashboard.",
"_____no_output_____"
]
],
[
[
"# imports\nimport panel as pn\npn.extension('plotly')\nimport plotly.express as px\nimport pandas as pd\nimport hvplot.pandas\nimport matplotlib.pyplot as plt\nimport os\nfrom pathlib import Path\nfrom dotenv import load_dotenv\nfrom panel.interact import interact",
"_____no_output_____"
],
[
"# Read the Mapbox API key\nload_dotenv()\nmap_box_api = os.getenv(\"mapbox\")\npx.set_mapbox_access_token(map_box_api)",
"_____no_output_____"
]
],
[
[
"# Import Data",
"_____no_output_____"
]
],
[
[
"# Import the necessary CSVs to Pandas DataFrames\nsfo_data = pd.read_csv(Path(\"Data/sfo_neighborhoods_census_data.csv\"), index_col=\"year\")\n\nneighborhood_locations = pd.read_csv(Path(\"Data/neighborhoods_coordinates.csv\"))\nneighborhood_locations.columns = [\"neighborhood\", \"Lat\", \"Lon\"]\nprint(sfo_data.head())\nprint(neighborhood_locations.head())",
" neighborhood sale_price_sqr_foot housing_units gross_rent\nyear \n2010 Alamo Square 291.182945 372560 1239\n2010 Anza Vista 267.932583 372560 1239\n2010 Bayview 170.098665 372560 1239\n2010 Buena Vista Park 347.394919 372560 1239\n2010 Central Richmond 319.027623 372560 1239\n neighborhood Lat Lon\n0 Alamo Square 37.791012 -122.402100\n1 Anza Vista 37.779598 -122.443451\n2 Bayview 37.734670 -122.401060\n3 Bayview Heights 37.728740 -122.410980\n4 Bernal Heights 37.728630 -122.443050\n"
]
],
[
[
"- - -",
"_____no_output_____"
],
[
"## Panel Visualizations\n\nIn this section, you will copy the code for each plot type from your analysis notebook and place it into separate functions that Panel can use to create panes for the dashboard. \n\nThese functions will convert the plot object to a Panel pane.\n\nBe sure to include any DataFrame transformation/manipulation code required along with the plotting code.\n\nReturn a Panel pane object from each function that can be used to build the dashboard.\n\nNote: Remove any `.show()` lines from the code. We want to return the plots instead of showing them. The Panel dashboard will then display the plots.",
"_____no_output_____"
]
],
[
[
"# caculations reused for different functions below \ntop_ten_expensive_neighborhood=sfo_data.groupby(['neighborhood']).mean().sort_values(by=['sale_price_sqr_foot'],ascending=False).reset_index().head(10)\nneighborhood_mean=sfo_data.groupby(['neighborhood']).mean().sort_values(by=['sale_price_sqr_foot'],ascending=False).reset_index()\ncombined_df=pd.concat([neighborhood_mean.set_index('neighborhood'),neighborhood_locations.set_index('neighborhood')],axis='columns',join='inner').reset_index()\ndf_expensive_neighborhoods_per_year = sfo_data[sfo_data[\"neighborhood\"].isin(top_ten_expensive_neighborhood[\"neighborhood\"])].reset_index()\ndf_expensive_neighborhoods=sfo_data.groupby(['neighborhood']).mean().sort_values(by=['sale_price_sqr_foot'],ascending=False).reset_index().head(10)\n",
"_____no_output_____"
],
[
"\n# Define Panel Visualization Functions\ndef housing_units_per_year():\n \"\"\"Housing Units Per Year.\"\"\"\n housing_units = sfo_data.groupby('year').mean()\n \n min = housing_units.min()['housing_units']\n max = housing_units.max()['housing_units']\n\n plot=housing_units.hvplot.bar(x='year',y='housing_units',ylim =(min-2000, max+2000),title=\"Average Housing Units/Year in San Francisco\", figsize=(6,4),yformatter='%.0f')\n return plot\n\n\ndef average_gross_rent():\n \"\"\"Average Gross Rent in San Francisco Per Year.\"\"\"\n avg_sfo_data_df=sfo_data.groupby(sfo_data.index)['sale_price_sqr_foot','gross_rent'].mean()\n return avg_sfo_data_df['gross_rent'].hvplot( figsize=(6,4))\n\ndef average_sales_price():\n \"\"\"Average Sales Price Per Year.\"\"\"\n avg_sfo_data_df=sfo_data.groupby(sfo_data.index)['sale_price_sqr_foot','gross_rent'].mean()\n return avg_sfo_data_df['sale_price_sqr_foot'].hvplot( figsize=(6,4))\n\ndef average_price_by_neighborhood():\n \"\"\"Average Prices by Neighborhood.\"\"\"\n mean_by_year_and_neighborhood=sfo_data.groupby([sfo_data.index,'neighborhood']).mean().reset_index()\n mean_by_year_and_neighborhood.head()\n return mean_by_year_and_neighborhood.hvplot(x='year',y='sale_price_sqr_foot',groupby='neighborhood')\n\ndef top_most_expensive_neighborhoods():\n \"\"\"Top 10 Most Expensive Neighborhoods.\"\"\"\n top_ten_expensive_neighborhood=sfo_data.groupby(['neighborhood']).mean().sort_values(by=['sale_price_sqr_foot'],ascending=False).reset_index().head(10)\n return top_ten_expensive_neighborhood.hvplot.bar(y='sale_price_sqr_foot',x='neighborhood',ylabel='Avg Sale Price per Square Foot',groupby='neighborhood')\n\n\ndef most_expensive_neighborhoods_rent_sales():\n \"\"\"Comparison of Rent and Sales Prices of Most Expensive Neighborhoods.\"\"\" \n mean_by_year_and_neighborhood=sfo_data.groupby([sfo_data.index,'neighborhood']).mean().reset_index()\n return mean_by_year_and_neighborhood.hvplot.bar(\"year\", [\"gross_rent\", \"sale_price_sqr_foot\"],groupby= \"neighborhood\",rot=90)\n \n\n \ndef parallel_coordinates():\n \"\"\"Parallel Coordinates Plot.\"\"\"\n return px.parallel_categories(\n df_expensive_neighborhoods,\n dimensions=['neighborhood','sale_price_sqr_foot','housing_units','gross_rent'],\n color=\"sale_price_sqr_foot\",\n color_continuous_scale=px.colors.sequential.Inferno,\n )\n\n\n\ndef parallel_categories():\n \"\"\"Parallel Categories Plot.\"\"\"\n return px.parallel_coordinates(df_expensive_neighborhoods,dimensions=['sale_price_sqr_foot','housing_units','gross_rent'] ,color='sale_price_sqr_foot')\n\n\n\ndef neighborhood_map():\n \"\"\"Neighborhood Map.\"\"\"\n\n mapbox_token=os.environ['mapbox']\n # Create a scatter mapbox to analyze neighborhood info\n px.set_mapbox_access_token(map_box_api)\n\n plot= px.scatter_mapbox(\n combined_df,\n lat=\"Lat\",\n lon=\"Lon\",\n size=\"sale_price_sqr_foot\",\n color=\"neighborhood\",\n zoom=10\n )\n return plot\n\n\ndef sunburst():\n \"\"\"Sunburst Plot.\"\"\"\n \n return px.sunburst(df_expensive_neighborhoods_per_year, path=['year', 'neighborhood'], values='sale_price_sqr_foot',color='gross_rent',color_continuous_scale=\"blues\", title = \"Cost Analysis of Most Expensive Neighborhoods in San Francisco per Year\")\n\n",
"_____no_output_____"
]
],
[
[
"## Panel Dashboard\n\nIn this section, you will combine all of the plots into a single dashboard view using Panel. Be creative with your dashboard design!",
"_____no_output_____"
]
],
[
[
"\n\ntitle = '##Real Estate Analysis of San Francisco'\nwelcome_tab = pn.Column(pn.Column(title), neighborhood_map())\n\nmarket_analysis_row = pn.Row(housing_units_per_year(), average_gross_rent(), average_sales_price())\n\n\nneighborhood_analysis_tab = pn.Column(average_price_by_neighborhood(),\n top_most_expensive_neighborhoods()\n)\n\nparallel_plots_tab = pn.Column( \n parallel_categories(),\n parallel_coordinates()\n)\n\nparallel_plots_tab = pn.Column( \n parallel_categories(),\n parallel_coordinates()\n)\n\n# Create tabs\n\nall_tabs = pn.Tabs((\"Welcome\", welcome_tab), \n(\"Yearly Market Analysis\", market_analysis_row),\n(\"Neighborhood Analysis\", neighborhood_analysis_tab), \n(\"Parallel Plot Analysis\", parallel_plots_tab),\n(\"Sunburst Plot Analysis\", sunburst())\n)\n\n\n# Create the dashboard\nSF_dashboard= pn.Column( \n \"#Real Estate Analysis of SFO from 2010 to 2016\",\n all_tabs\n)",
"/Users/manishrajkarnikar/opt/anaconda3/envs/alpacaenv/lib/python3.7/site-packages/ipykernel_launcher.py:15: FutureWarning:\n\nIndexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.\n\n/Users/manishrajkarnikar/opt/anaconda3/envs/alpacaenv/lib/python3.7/site-packages/ipykernel_launcher.py:20: FutureWarning:\n\nIndexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.\n\n"
]
],
[
[
"## Serve the Panel Dashboard",
"_____no_output_____"
]
],
[
[
"# Serve the# dashboard\nSF_dashboard.servable()",
"_____no_output_____"
]
],
[
[
"# Debugging\n\nNote: Some of the Plotly express plots may not render in the notebook through the panel functions.\n\nHowever, you can test each plot by uncommenting the following code",
"_____no_output_____"
]
],
[
[
"housing_units_per_year()",
"_____no_output_____"
],
[
"average_gross_rent()",
"/Users/manishrajkarnikar/opt/anaconda3/envs/alpacaenv/lib/python3.7/site-packages/ipykernel_launcher.py:15: FutureWarning:\n\nIndexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.\n\n"
],
[
"average_sales_price()",
"/Users/manishrajkarnikar/opt/anaconda3/envs/alpacaenv/lib/python3.7/site-packages/ipykernel_launcher.py:20: FutureWarning:\n\nIndexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.\n\n"
],
[
"average_price_by_neighborhood()",
"_____no_output_____"
],
[
"top_most_expensive_neighborhoods()",
"_____no_output_____"
],
[
"most_expensive_neighborhoods_rent_sales()",
"_____no_output_____"
],
[
"neighborhood_map()",
"_____no_output_____"
],
[
"parallel_categories()",
"_____no_output_____"
],
[
"parallel_coordinates()",
"_____no_output_____"
],
[
"sunburst()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7325cdf4d4f0f2b2c173027584902a2ce56a3f1 | 11,531 | ipynb | Jupyter Notebook | cs231n_assignment1/softmax.ipynb | gongjue/cs231n | 2eb5d38eee8732b3286f6c18aa519b1181f180f5 | [
"MIT"
] | null | null | null | cs231n_assignment1/softmax.ipynb | gongjue/cs231n | 2eb5d38eee8732b3286f6c18aa519b1181f180f5 | [
"MIT"
] | null | null | null | cs231n_assignment1/softmax.ipynb | gongjue/cs231n | 2eb5d38eee8732b3286f6c18aa519b1181f180f5 | [
"MIT"
] | null | null | null | 37.196774 | 290 | 0.573758 | [
[
[
"# Softmax exercise\n\n*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*\n\nThis exercise is analogous to the SVM exercise. You will:\n\n- implement a fully-vectorized **loss function** for the Softmax classifier\n- implement the fully-vectorized expression for its **analytic gradient**\n- **check your implementation** with numerical gradient\n- use a validation set to **tune the learning rate and regularization** strength\n- **optimize** the loss function with **SGD**\n- **visualize** the final learned weights\n",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading extenrnal modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):\n \"\"\"\n Load the CIFAR-10 dataset from disk and perform preprocessing to prepare\n it for the linear classifier. These are the same steps as we used for the\n SVM, but condensed to a single function. \n \"\"\"\n # Load the raw CIFAR-10 data\n cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\n X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n \n # subsample the data\n mask = range(num_training, num_training + num_validation)\n X_val = X_train[mask]\n y_val = y_train[mask]\n mask = range(num_training)\n X_train = X_train[mask]\n y_train = y_train[mask]\n mask = range(num_test)\n X_test = X_test[mask]\n y_test = y_test[mask]\n mask = np.random.choice(num_training, num_dev, replace=False)\n X_dev = X_train[mask]\n y_dev = y_train[mask]\n \n # Preprocessing: reshape the image data into rows\n X_train = np.reshape(X_train, (X_train.shape[0], -1))\n X_val = np.reshape(X_val, (X_val.shape[0], -1))\n X_test = np.reshape(X_test, (X_test.shape[0], -1))\n X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))\n \n # Normalize the data: subtract the mean image\n mean_image = np.mean(X_train, axis = 0)\n X_train -= mean_image\n X_val -= mean_image\n X_test -= mean_image\n X_dev -= mean_image\n \n # add bias dimension and transform into columns\n X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])\n X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])\n X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])\n X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])\n \n return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev\n\n\n# Invoke the above function to get our data.\nX_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()\nprint 'Train data shape: ', X_train.shape\nprint 'Train labels shape: ', y_train.shape\nprint 'Validation data shape: ', X_val.shape\nprint 'Validation labels shape: ', y_val.shape\nprint 'Test data shape: ', X_test.shape\nprint 'Test labels shape: ', y_test.shape\nprint 'dev data shape: ', X_dev.shape\nprint 'dev labels shape: ', y_dev.shape",
"_____no_output_____"
]
],
[
[
"## Softmax Classifier\n\nYour code for this section will all be written inside **cs231n/classifiers/softmax.py**. \n",
"_____no_output_____"
]
],
[
[
"# First implement the naive softmax loss function with nested loops.\n# Open the file cs231n/classifiers/softmax.py and implement the\n# softmax_loss_naive function.\n\nfrom cs231n.classifiers.softmax import softmax_loss_naive\nimport time\n\n# Generate a random softmax weight matrix and use it to compute the loss.\nW = np.random.randn(3073, 10) * 0.0001\nloss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)\n\n# As a rough sanity check, our loss should be something close to -log(0.1).\nprint 'loss: %f' % loss\nprint 'sanity check: %f' % (-np.log(0.1))",
"_____no_output_____"
]
],
[
[
"## Inline Question 1:\nWhy do we expect our loss to be close to -log(0.1)? Explain briefly.**\n\n**Your answer:** *Fill this in*\n",
"_____no_output_____"
]
],
[
[
"# Complete the implementation of softmax_loss_naive and implement a (naive)\n# version of the gradient that uses nested loops.\nloss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)\n\n# As we did for the SVM, use numeric gradient checking as a debugging tool.\n# The numeric gradient should be close to the analytic gradient.\nfrom cs231n.gradient_check import grad_check_sparse\nf = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]\ngrad_numerical = grad_check_sparse(f, W, grad, 10)\n\n# similar to SVM case, do another gradient check with regularization\nloss, grad = softmax_loss_naive(W, X_dev, y_dev, 1e2)\nf = lambda w: softmax_loss_naive(w, X_dev, y_dev, 1e2)[0]\ngrad_numerical = grad_check_sparse(f, W, grad, 10)",
"_____no_output_____"
],
[
"# Now that we have a naive implementation of the softmax loss function and its gradient,\n# implement a vectorized version in softmax_loss_vectorized.\n# The two versions should compute the same results, but the vectorized version should be\n# much faster.\ntic = time.time()\nloss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.00001)\ntoc = time.time()\nprint 'naive loss: %e computed in %fs' % (loss_naive, toc - tic)\n\nfrom cs231n.classifiers.softmax import softmax_loss_vectorized\ntic = time.time()\nloss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.00001)\ntoc = time.time()\nprint 'vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic)\n\n# As we did for the SVM, we use the Frobenius norm to compare the two versions\n# of the gradient.\ngrad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')\nprint 'Loss difference: %f' % np.abs(loss_naive - loss_vectorized)\nprint 'Gradient difference: %f' % grad_difference",
"_____no_output_____"
],
[
"# Use the validation set to tune hyperparameters (regularization strength and\n# learning rate). You should experiment with different ranges for the learning\n# rates and regularization strengths; if you are careful you should be able to\n# get a classification accuracy of over 0.35 on the validation set.\nfrom cs231n.classifiers import Softmax\nresults = {}\nbest_val = -1\nbest_softmax = None\nlearning_rates = [1e-7, 5e-7]\nregularization_strengths = [5e4, 1e8]\n\n################################################################################\n# TODO: #\n# Use the validation set to set the learning rate and regularization strength. #\n# This should be identical to the validation that you did for the SVM; save #\n# the best trained softmax classifer in best_softmax. #\n################################################################################\npass\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n \n# Print out results.\nfor lr, reg in sorted(results):\n train_accuracy, val_accuracy = results[(lr, reg)]\n print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (\n lr, reg, train_accuracy, val_accuracy)\n \nprint 'best validation accuracy achieved during cross-validation: %f' % best_val",
"_____no_output_____"
],
[
"# evaluate on test set\n# Evaluate the best softmax on test set\ny_test_pred = best_softmax.predict(X_test)\ntest_accuracy = np.mean(y_test == y_test_pred)\nprint 'softmax on raw pixels final test set accuracy: %f' % (test_accuracy, )",
"_____no_output_____"
],
[
"# Visualize the learned weights for each class\nw = best_softmax.W[:-1,:] # strip out the bias\nw = w.reshape(32, 32, 3, 10)\n\nw_min, w_max = np.min(w), np.max(w)\n\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nfor i in xrange(10):\n plt.subplot(2, 5, i + 1)\n \n # Rescale the weights to be between 0 and 255\n wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)\n plt.imshow(wimg.astype('uint8'))\n plt.axis('off')\n plt.title(classes[i])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7327a18ae881301239d2e1bebbbcfa40d512c74 | 23,794 | ipynb | Jupyter Notebook | notebooks/old/v3_basic.ipynb | marcmartinezruiz/thesis | 6af4abe0b170c7c25255f1611c205723af0b64bf | [
"Zlib"
] | null | null | null | notebooks/old/v3_basic.ipynb | marcmartinezruiz/thesis | 6af4abe0b170c7c25255f1611c205723af0b64bf | [
"Zlib"
] | null | null | null | notebooks/old/v3_basic.ipynb | marcmartinezruiz/thesis | 6af4abe0b170c7c25255f1611c205723af0b64bf | [
"Zlib"
] | null | null | null | 38.689431 | 361 | 0.565857 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7327ff7983330d89cca327bbfb2ea3729428ce0 | 32,757 | ipynb | Jupyter Notebook | ipynb/hmdp.ipynb | ckling/hmdp | 183d4c346e7e5698de5b6a50435514adea1e1a2c | [
"Apache-2.0"
] | 2 | 2017-08-10T10:39:27.000Z | 2021-03-10T07:34:07.000Z | ipynb/hmdp.ipynb | ckling/hmdp | 183d4c346e7e5698de5b6a50435514adea1e1a2c | [
"Apache-2.0"
] | null | null | null | ipynb/hmdp.ipynb | ckling/hmdp | 183d4c346e7e5698de5b6a50435514adea1e1a2c | [
"Apache-2.0"
] | 1 | 2019-03-22T13:50:59.000Z | 2019-03-22T13:50:59.000Z | 49.258647 | 275 | 0.48411 | [
[
[
"# HMDP Topic Model IPython Wrapper\n\nThis is a Python class which wraps the Java binaries from the HMDP topic model from the PROMOSS topic modelling toolbox. The *promoss.jar* is expected to be in *../promoss.jar*.\n\n## HMDP class\n\nThe HMDP class contains all the methods required to run the HMDP topic model. \n\n### Mandatory parameters\nIt takes two parameters as mandatory parameters:\n* directory \t\tString. Gives the directory of the texts.txt and groups.txt file.\n* meta_params\t\tString. Specifies the metadata types and gives the desired clustering. Types of metadata are given separated by semicolons (and correspond to the number of different metadata in the meta.txt file. Possible datatypes are:\n * G\tGeographical coordinates. The number of desired clusters is specified in brackets, i.e. G(1000) will cluster the documents into 1000 clusters based on the geographical coordinates. (Technical detail: we use EM to fit a mixture of fisher distributions.)\n * T\tUNIX timestamps (in seconds). The number of clusters (based on binning) is given in brackets, and there can be multiple clusterings based on a binning on the timeline or temporal cycles. This is indicated by a letter followed by the number of desired clusters:\n * L\tBinning based on the timeline. Example: L1000 gives 1000 bins.\n * Y\tBinning based on the yearly cycle. Example: L1000 gives 1000 bins.\n * M\tBinning based on the monthly cycle. Example: L1000 gives 1000 bins.\n * W\tBinning based on the weekly cycle. Example: L1000 gives 1000 bins.\n * D\tBinning based on the daily cycle. Example: L1000 gives 1000 bins.\n * O\tOrdinal values (numbers)\n * N\tNominal values (text strings)\n\n### Optional parameters\nAdditionally, optional parameters can be given. The most commonly used ones are *T, RUNS, processed, stemming, stopwords* and *language*.\n* T\t\t\tInteger. Number of truncated topics. Default: 100\n* RUNS\t\t\tInteger. Number of iterations the sampler will run. Default: 200\n* SAVE_STEP\t\tInteger. Number of iterations after which the learned paramters are saved. Default: 10\n* TRAINING_SHARE\t\tDouble. Gives the share of documents which are used for training (0 to 1). Default: 1\n* BATCHSIZE\t\tInteger. Batch size for topic estimation. Default: 128\n* BATCHSIZE_GROUPS\tInteger. Batch size for group-specific parameter estimation. Default: BATCHSIZE\n* BURNIN\t\t\tInteger. Number of iterations till the topics are updated. Default: 0\n* BURNIN_DOCUMENTS\tInteger. Gives the number of sampling iterations where the group-specific parameters are not updated yet. Default: 0\n* INIT_RAND\t\tDouble. Topic-word counts are initiatlised as INIT_RAND * RANDOM(). Default: 0\n* SAMPLE_ALPHA\t\tInteger. Every SAMPLE_ALPHAth document is used to estimate alpha_1. Default: 1\n* BATCHSIZE_ALPHA\tInteger. How many observations do we take before updating alpha_1. Default: 1000\n* MIN_DICT_WORDS\t\tInteger. If the words.txt file is missing, words.txt is created by using words which occur at least MIN_DICT_WORDS times in the corpus. Default: 100\n* save_prefix\t\tString. If given, this String is appended to all output files.\n* alpha_0\t\tDouble. Initial value of alpha_0. Default: 1\n* alpha_1\t\tDouble. Initial value of alpha_1. Default: 1\n* epsilon\t\tComma-separated double. Dirichlet prior over the weights of contexts. Comma-separated double values, with dimensionality equal to the number of contexts.\n* delta_fix \t\tIf set, delta is fixed and set to this value. Otherwise delta is learned during inference.\n* rhokappa\t\tDouble. 
Initial value of kappa, a parameter for the learning rate of topics. Default: 0.5\n* rhotau\t\t\tInteger. Initial value of tau, a parameter for the learning rate of topics. Default: 64\n* rhos\t\t\tInteger. Initial value of s, a parameter for the learning rate of topics. Default: 1\n* rhokappa_document\tDouble. Initial value of kappa, a parameter for the learning rate of the document-topic distribution. Default: kappa\n* rhotau_document\tInteger. Initial value of tau, a parameter for the learning rate of the document-topic distribution. Default: tau\n* rhos_document\t\tInteger. Initial value of tau, a parameter for the learning rate of the document-topic distribution. Default: rhos\n* rhokappa_group\t\tDouble. Initial value of kappa, a parameter for the learning rate of the group-topic distribution. Default: kappa\n* rhotau_group\t\tInteger. Initial value of tau, a parameter for the learning rate of the group-topic distribution. Default: tau\n* rhos_group\t\tInteger. Initial value of tau, a parameter for the learning rate of the group-topic distribution. Default: rhos\n* processed\t\tBoolean. Tells if the text is already processed, or if words should be split with complex regular expressions. Otherwise split by spaces. Default: true.\n* stemming\t\tBoolean. Activates word stemming in case no words.txt/wordsets file is given. Default: false\n* stopwords\t\tBoolean. Activates stopword removal in case no words.txt/wordsets file is given. Default: false\n* language\t\tString. Currently \"en\" and \"de\" are available languages for stemming. Default: \"en\"\n* store_empty\t\tBoolean. Determines if empty documents should be omitted in the final document-topic matrix or if the topic distribution should be predicted using the context. Default: True\n* topk\t\t\tInteger. Set the number of top words returned in the topktopics file of the output. Default: 100\n* gamma\t\t\tDouble. Initial scaling parameter of the top-level Dirichlet process. Default: 1\n* learn_gamma\t\tBoolean. Should gamma be learned during inference? Default: True\n\n### Provided methods\n\n#### run()\nThis method executes the java binaries with the parameters specified in the initialisation step.\n\n#### check_run()\nChecks if the HMDP model was already trained.\n\n*Output: Boolean\n\n#### map_from_JSON()\nCreates HTML files with interactive maps showing the topic probabilities per cluster for all geographical metadata.\n<img src=\"img/screenshot_map.png\" style=\"height: 300px\" />\n*Input: \n * color: Gives the color of the markers (hexadecimal, e.g. #aa23cc). Default: auto (changing colours)\n * marker_size: Integer, size of markers. Default: 10\n * show_map: Show map in the IPython notebook. Warning, this can crash your browser. Default: false\n\n\n#### plot_zeta()\nShow metadata (feature) weights.\n\n#### plot_time()\nPlot temporal distribution(s) of topic probabilities for a given topic.\n\n* Input: ID of a topic\n\n#### plot_ordinal()\nPlot distribution of topic probabilities over ordinal metadata variables for a given topic.\n* Input: ID of a topic\n\n\n#### get_topics()\n* Output: Returns the top-k words (k given by parameter -topk of the HMDP class) in a pandas DataFrame.",
"_____no_output_____"
]
],
[
[
"# coding: utf-8\n%matplotlib inline\n\nimport json\nimport io, os, shutil, time, datetime\nimport subprocess\nimport folium\nfrom IPython.core.display import HTML\nfrom IPython.display import IFrame, display\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nclass HMDP(object):\n directory = \"\";\n meta_params = \"\";\n T=100\n RUNS=200\n SAVE_STEP=10\n TRAINING_SHARE=1.0\n BATCHSIZE=128\n BATCHSIZE_GROUPS=128\n BURNIN=0\n BURNIN_DOCUMENTS=0\n INIT_RAND=0\n SAMPLE_ALPHA=1\n BATCHSIZE_ALPHA=1000\n MIN_DICT_WORDS=100\n alpha_0=1\n alpha_1=1\n epsilon=\"none\"\n delta_fix=\"none\"\n rhokappa=0.5\n rhotau=64\n rhos=1\n rhokappa_document=0.5\n rhotau_document=64\n rhos_document=1\n rhokappa_group=0.5\n rhotau_group=64\n rhos_group=1\n processed=True\n stemming=False\n stopwords=False\n language=\"en\"\n store_empty=True\n topk=100\n gamma = 1\n learn_gamma = True;\n \n def __init__(self,\n directory,\n meta_params,\n T=100,\n RUNS=200,\n SAVE_STEP=10,\n TRAINING_SHARE=1.0,\n BATCHSIZE=128,\n BATCHSIZE_GROUPS=128,\n BURNIN=0,\n BURNIN_DOCUMENTS=0,\n INIT_RAND=0,\n SAMPLE_ALPHA=1,\n BATCHSIZE_ALPHA=1000,\n MIN_DICT_WORDS=100,\n alpha_0=1,\n alpha_1=1,\n epsilon=\"none\",\n delta_fix=\"none\",\n rhokappa=0.5,\n rhotau=64,\n rhos=1,\n rhokappa_document=0.5,\n rhotau_document=64,\n rhos_document=1,\n rhokappa_group=0.5,\n rhotau_group=64,\n rhos_group=1,\n processed=True,\n stemming=False,\n stopwords=False,\n language=\"en\",\n store_empty=True,\n topk=100,\n gamma = 1,\n learn_gamma = True\n ):\n self.directory = directory\n self.meta_params = meta_params\n self.T = T\n self.RUNS = RUNS\n self.SAVE_STEP = SAVE_STEP\n self.TRAINING_SHARE = TRAINING_SHARE\n self.BATCHSIZE = BATCHSIZE\n self.BATCHSIZE_GROUPS = BATCHSIZE_GROUPS\n self.BURNIN = BURNIN\n self.BURNIN_DOCUMENTS = BURNIN_DOCUMENTS\n self.INIT_RAND = INIT_RAND\n self.SAMPLE_ALPHA = SAMPLE_ALPHA\n self.BATCHSIZE_ALPHA = BATCHSIZE_ALPHA\n self.MIN_DICT_WORDS = MIN_DICT_WORDS\n self.alpha_0 = alpha_0\n self.alpha_1 = alpha_1\n self.epsilon = epsilon\n self.delta_fix = delta_fix\n self.rhokappa = rhokappa\n self.rhotau = rhotau\n self.rhos = rhos\n self.rhokappa_document = rhokappa_document\n self.rhotau_document = rhotau_document\n self.rhos_document = rhos_document\n self.rhokappa_group = rhokappa_group\n self.rhotau_group = rhotau_group\n self.rhos_group = rhos_group\n self.processed = processed\n self.stemming = stemming\n self.stopwords = stopwords\n self.language = language\n self.store_empty = store_empty\n self.topk = topk\n self.gamma = gamma\n self.learn_gamma = learn_gamma\n\n def run(self, RUNS = None):\n if RUNS == None:\n RUNS = self.RUNS;\n \n print(\"Running HMDP topic model... (please wait)\");\n\n if os.path.isdir(directory+\"/output_HMDP\"):\n shutil.rmtree(directory+\"/output_HMDP\") \n if os.path.isdir(self.directory+\"/cluster_desc\"):\n shutil.rmtree(self.directory+\"/cluster_desc\") \n\n if os.path.isfile(self.directory+\"/groups\"):\n os.remove(self.directory+\"/groups\")\n if os.path.isfile(self.directory+\"/groups.txt\"):\n os.remove(self.directory+\"/groups.txt\")\n if os.path.isfile(self.directory+\"/text.txt\"):\n os.remove(self.directory+\"/text.txt\")\n if os.path.isfile(self.directory+\"/words.txt\"):\n os.remove(self.directory+\"/words.txt\")\n if os.path.isfile(self.directory+\"/wordsets\"):\n os.remove(self.directory+\"/wordsets\")\n\n if not os.path.isfile(\"../promoss.jar\"):\n print(\"Could not find ../promoss.jar. 
Exit\")\n return;\n try:\n with subprocess.Popen(['java', '-jar', '../promoss.jar', \n '-directory', self.directory, \n '-meta_params', self.meta_params, \n '-T',str(self.T),\n '-RUNS',str(self.RUNS),\n '-SAVE_STEP',str(self.SAVE_STEP),\n '-TRAINING_SHARE',str(self.TRAINING_SHARE),\n '-BATCHSIZE',str(self.BATCHSIZE),\n '-BATCHSIZE_GROUPS',str(self.BATCHSIZE_GROUPS),\n '-BURNIN',str(self.BURNIN),\n '-BURNIN_DOCUMENTS',str(self.BURNIN_DOCUMENTS),\n '-INIT_RAND',str(self.INIT_RAND),\n '-SAMPLE_ALPHA',str(self.SAMPLE_ALPHA),\n '-BATCHSIZE_ALPHA',str(self.BATCHSIZE_ALPHA),\n '-MIN_DICT_WORDS',str(self.MIN_DICT_WORDS),\n '-alpha_0',str(self.alpha_0),\n '-alpha_1',str(self.alpha_1),\n '-epsilon',str(self.epsilon),\n '-delta_fix',str(self.delta_fix),\n '-rhokappa',str(self.rhokappa),\n '-rhotau',str(self.rhotau),\n '-rhos',str(self.rhos),\n '-rhokappa_document',str(self.rhokappa_document),\n '-rhotau_document',str(self.rhotau_document),\n '-rhos_document',str(self.rhos_document),\n '-rhokappa_group',str(self.rhokappa_group),\n '-rhotau_group',str(self.rhotau_group),\n '-rhos_group',str(self.rhos_group),\n '-processed',str(self.processed),\n '-stemming',str(self.stemming),\n '-stopwords',str(self.stopwords),\n '-language',str(self.language),\n '-store_empty',str(self.store_empty),\n '-topk',str(self.topk),\n '-gamma',str(self.gamma),\n '-learn_gamma',str(self.learn_gamma)\n ], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as p: \n\n for line in p.stdout:\n line = str(line)[2:-1].replace(\"\\\\n\",\"\").replace(\"\\\\t\",\" \")\n print(line, end='\\n');\n for line in p.stderr:\n line = str(line)[2:-1].replace(\"\\\\n\",\"\").replace(\"\\\\t\",\" \")\n print(line, end='\\n');\n\n \n #rc = process.poll();\n #print(\"Finished with return code \" + str(rc));\n except subprocess.CalledProcessError as e:\n print(e.returncode)\n print(e.output)\n\n def check_run(self):\n if os.path.isdir(self.directory + \"/output_HMDP/\" + str(self.RUNS)):\n return True;\n else:\n print(\"Please call run() first\");\n return False;\n \n \n #returns the command which we used to call the java file\n def get_command(self):\n args = ['java', '-jar', '../promoss.jar', \n '-directory', self.directory, \n '-meta_params', self.meta_params, \n '-T',str(self.T),\n '-RUNS',str(self.RUNS),\n '-SAVE_STEP',str(self.SAVE_STEP),\n '-TRAINING_SHARE',str(self.TRAINING_SHARE),\n '-BATCHSIZE',str(self.BATCHSIZE),\n '-BATCHSIZE_GROUPS',str(self.BATCHSIZE_GROUPS),\n '-BURNIN',str(self.BURNIN),\n '-BURNIN_DOCUMENTS',str(self.BURNIN_DOCUMENTS),\n '-INIT_RAND',str(self.INIT_RAND),\n '-SAMPLE_ALPHA',str(self.SAMPLE_ALPHA),\n '-BATCHSIZE_ALPHA',str(self.BATCHSIZE_ALPHA),\n '-MIN_DICT_WORDS',str(self.MIN_DICT_WORDS),\n '-alpha_0',str(self.alpha_0),\n '-alpha_1',str(self.alpha_1),\n '-epsilon',str(self.epsilon),\n '-delta_fix',str(self.delta_fix),\n '-rhokappa',str(self.rhokappa),\n '-rhotau',str(self.rhotau),\n '-rhos',str(self.rhos),\n '-rhokappa_document',str(self.rhokappa_document),\n '-rhotau_document',str(self.rhotau_document),\n '-rhos_document',str(self.rhos_document),\n '-rhokappa_group',str(self.rhokappa_group),\n '-rhotau_group',str(self.rhotau_group),\n '-rhos_group',str(self.rhos_group),\n '-processed',str(self.processed),\n '-stemming',str(self.stemming),\n '-stopwords',str(self.stopwords),\n '-language',str(self.language),\n '-store_empty',str(self.store_empty),\n '-topk',str(self.topk)\n ];\n return (\" \".join(args));\n \n #function to create topic maps based on JSON files by the HMDP topic model\n def map_from_JSON(self, 
base_folder = None, runs = None, color='auto', marker_size=10, show_map=False):\n\n if not self.check_run():\n return;\n \n if base_folder == None:\n base_folder = self.directory;\n if runs == None:\n runs = self.RUNS;\n \n topics = self.get_topics();\n k = 3;\n \n #we only create a map for the final run folder.\n #comment the next line to create maps for all folders\n final_run_folder = base_folder + \"/output_HMDP/\" + str(runs) +\"/\";\n\n #traverse folders containing geojson files\n folders = [x[0] for x in os.walk(final_run_folder) if x[0].endswith(\"_geojson\")];\n \n if (len(folders)==0):\n print(\"No geoJSON data found. Does your model contain geographical metadata?\");\n return;\n \n for folder in folders:\n print(\"opening folder \"+folder+\":\");\n\n #Create new folium map class\n f_map = folium.Map(location=[50, 6], tiles='Stamen Toner', zoom_start=1);\n\n #traverse geoJSON files\n files = [f for f in os.listdir(folder) if os.path.isfile(os.path.join(folder, f)) & f.endswith(\".geojson\")];\n \n topic_numbers = [\"\"]*len(files);\n \n for i in range(0,len(files)):\n topic_numbers[i] = int(files[i].split(\"_\")[1].split(\".\")[0]);\n \n files = [x for (y,x) in sorted(zip(topic_numbers,files))]\n \n i = 0;\n for file in files:\n print(\"processing \"+file+\" ...\");\n\n with open(folder+'/'+file) as f:\n geojson = json.load(f)\n\n icon_size = (14, 14)\n\n #name of the topic are the first three topic words\n name = \"Topic \"+str(i)+\": \"+\" \".join(topics.iloc[i][0:k]); \n #traverse geoJSON features\n feature_group = folium.FeatureGroup(name);\n for feature in geojson['features']:\n #we get position, colour, transparency from JSON\n lat, lon = feature['geometry']['coordinates'];\n if color == 'auto':\n fillColor = \"#\"+feature['properties']['fillColor'];\n else:\n fillColor = color;\n fillOpacity = feature['properties']['fillOpacity'];\n marker = folium.CircleMarker([lat, lon], \n fill_color=fillColor, \n fill_opacity=fillOpacity,\n color = \"none\",\n radius = marker_size)\n feature_group.add_child(marker);\n\n f_map.add_child(feature_group);\n f.close();\n i=i+1;\n\n #add layer control to activate/deactivate topics\n folium.LayerControl().add_to(f_map); \n #save map\n f_map.save(folder+'/topic_map.htm')\n print('created map in: '+folder+'/topic_map.htm');\n f_map._repr_html_();\n #show map only if wanted, can consume quite some memory\n if show_map:\n if not os.path.exists(\"tmp\"):\n os.makedirs(\"tmp\");\n\n f_map.save(\"tmp/\"+folder.split(\"/\")[-1]+\"_map.html\");\n display(IFrame(\"tmp/\"+folder.split(\"/\")[-1]+\"_map.html\",width=400, height=400));\n display(f_map._repr_png());\n display(HTML('<a href=\"file://'+folder+'/topic_map.htm'+'\" target=\"_blank\">Link to map of '+folder.split(\"/\")[-1].replace(\"_geojson\",\"\")+'</a>'));\n\n #plot topic proportions\n def plot_zeta(self, directory=None, RUNS=None):\n \n if not self.check_run():\n return;\n \n if directory == None:\n directory = self.directory;\n if RUNS == None:\n RUNS = self.RUNS;\n \n fig = plt.figure();\n \n zeta_file = self.directory + \"/output_HMDP/\" + str(RUNS) +\"/zeta\";\n \n df = pd.read_csv(zeta_file, header=None);\n zeta = df.iloc[[0]].values[0];\n print(zeta);\n \n plt.bar(range(0,len(zeta)),zeta);\n plt.xticks(range(0,len(zeta)));\n plt.xlabel(\"Features\");\n plt.ylabel(\"Feature weight\");\n plt.show();\n \n return(fig);\n \n #read topics ad DataFrame\n def get_topics(self, directory=None, RUNS=None):\n \n if not self.check_run():\n return;\n \n if directory == None:\n directory = 
self.directory;\n if RUNS == None:\n RUNS = self.RUNS;\n \n \n topic_file = self.directory + \"/output_HMDP/\" + str(RUNS) +\"/topktopic_words\";\n \n df = pd.read_csv(topic_file, header=None, sep=\" \");\n \n return(df);\n \n #plot topic probabilities over time\n def plot_time(self, topic_ID, directory=None, RUNS=None):\n \n if not self.check_run():\n return;\n \n if directory == None:\n directory = self.directory;\n if RUNS == None:\n RUNS = self.RUNS; \n \n topics = self.get_topics();\n k = min(3,len(topics.iloc[0]));\n \n #traverse folders containing time files\n time_files = [x for x in os.listdir(directory+\"/cluster_desc/\") if x.endswith(\"_L\")];\n\n \n figs = [];\n \n for time_file in time_files:\n \n time_file = directory+\"/cluster_desc/\"+time_file;\n \n cluster_number = int(time_file.split(\"/\")[-1][7:-2]);\n \n times = pd.read_csv(time_file, header=None, sep=\" \");\n times = times[1];\n \n #print(times);\n \n first_time = min(times);\n last_time = max(times); \n \n first_date = datetime.datetime.fromtimestamp(\n int(first_time)\n ).strftime('%d.%m.%Y'); \n last_date = datetime.datetime.fromtimestamp(\n int(last_time)\n ).strftime('%d.%m.%Y');\n\n fig = plt.figure();\n\n cluster_file = self.directory + \"/output_HMDP/\" + str(RUNS) +\"/clusters_\"+str(cluster_number);\n \n probabilities = pd.read_csv(cluster_file, header=None);\n topic_probabilities = probabilities[topic_ID];\n\n topic_probabilities = [x for (y,x) in sorted(zip(times,topic_probabilities))]\n times = sorted(times);\n\n \n #name of the topic are the first three topic words\n name = \"Topic \"+str(topic_ID)+\": \"+\" \".join(topics.iloc[topic_ID][0:k]);\n \n fig = plt.figure();\n \n #print(times)\n #print(topic_probabilities)\n \n plt.scatter(times, topic_probabilities);\n plt.xticks([first_time,last_time],[first_date,last_date]);\n plt.xlabel(\"Time\");\n plt.ylabel(\"Topic probability\");\n plt.legend([name]);\n plt.show();\n figs.append(fig);\n \n return(figs);\n \n #plot topic probabilities for ordinal data\n def plot_ordinal(self, topic_ID, directory=None, RUNS=None):\n \n if not self.check_run():\n return;\n \n if directory == None:\n directory = self.directory;\n if RUNS == None:\n RUNS = self.RUNS; \n \n topics = self.get_topics();\n k = min(3,len(topics.iloc[0]));\n \n #traverse folders containing time files\n cluster_files = [x for x in os.listdir(directory+\"/cluster_desc/\") if x.endswith(\"_O\")];\n \n figs = [];\n \n for cluster_file in cluster_files:\n \n cluster_file = directory+\"/cluster_desc/\"+cluster_file;\n \n cluster_number = int(cluster_file.split(\"/\")[-1][7:-2]);\n \n lines = pd.read_csv(cluster_file, header=None,names=[\"keys\",\"values\"], skiprows=1, sep=\" \");\n keys = lines[\"keys\"].values;\n values = lines[\"values\"].values;\n \n #sort by keys\n [keys,values] = list(zip(*sorted(zip(keys,values))));\n \n #print(times);\n \n fig = plt.figure();\n\n cluster_file = self.directory + \"/output_HMDP/\" + str(RUNS) +\"/clusters_\"+str(cluster_number);\n \n probabilities = pd.read_csv(cluster_file, header=None);\n topic_probabilities = probabilities[topic_ID];\n topic_probabilities = [x for (y,x) in sorted(zip(values,topic_probabilities))]\n \n #name of the topic are the first three topic words\n name = \"Topic \"+str(topic_ID)+\": \"+\" \".join(topics.iloc[topic_ID][0:k]);\n \n fig = plt.figure();\n \n #print(times)\n #print(topic_probabilities)\n \n value_array = [];\n value_array.append(values);\n value_array = [x for xs in value_array for x in xs];\n #print(value_array);\n 
#print(topic_probabilities);\n \n plt.scatter(value_array, topic_probabilities);\n plt.xticks(values,keys);\n plt.xticks(rotation=90)\n plt.xlabel(\"Category\");\n plt.ylabel(\"Topic probability\");\n plt.legend([name]);\n plt.show();\n figs.append(fig);\n \n return(figs);",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
e7328411b6a5660c5156f9b2fa45e439c3f5b087 | 7,245 | ipynb | Jupyter Notebook | user_defined_and_inbuilt_exceptions.ipynb | cj-asimov12/python_ds | b063e1047addc337af451566d93e3851615b4ef2 | [
"MIT"
] | null | null | null | user_defined_and_inbuilt_exceptions.ipynb | cj-asimov12/python_ds | b063e1047addc337af451566d93e3851615b4ef2 | [
"MIT"
] | null | null | null | user_defined_and_inbuilt_exceptions.ipynb | cj-asimov12/python_ds | b063e1047addc337af451566d93e3851615b4ef2 | [
"MIT"
] | null | null | null | 33.541667 | 924 | 0.563423 | [
[
[
"'''\n1. With the help of try and inbuilt exception, display the exception.\n'''\nimport sys\ntry:\n print(2/0)\nexcept:\n print(sys.exc_info())",
"(<class 'ZeroDivisionError'>, ZeroDivisionError('division by zero'), <traceback object at 0x000001C2EB182FC0>)\n"
],
[
"'''\n2. Take two user inputs and pass those input variables in a try block. If the user input is 0, then\nthrow the ZeroDivisionError exception.\n'''\n\na = int(input(\"Enter a number: \"))\nb = int(input(\"Enter another number: \"))\n\ntry:\n print(a/b)\nexcept:\n print(sys.exc_info())\n",
"Enter a number: 2\nEnter another number: 0\n(<class 'ZeroDivisionError'>, ZeroDivisionError('division by zero'), <traceback object at 0x000001C2EB1C7CC0>)\n"
],
[
"'''\n3. Import math package and with the help of math package, print – math.exp(50000). Now use\nthe inbuilt OverflowError exception on the math.exp(), and print the exception.\n'''\nimport math\n\nanswer = math.exp(50000)\nprint(answer)\n",
"_____no_output_____"
],
[
"'''\nNow use the inbuilt OverflowError exception on the math.exp(), and print the exception.\n'''\n\nimport math\n\ntry:\n answer = math.exp(50000)\n print(answer)\n\nexcept OverflowError as oe:\n print(\"(put value in range)\", oe)\n",
"(put value in range) math range error\n"
],
[
"'''\n4. Now install the ‘termcolor’ package in the anaconda prompt, and import that package to\nprovide the colors to the print statement in Jupyter Notebook.\n'''\nfrom termcolor import colored, cprint\n\ncprint(\"This text is using termcolor package.\", 'green')\n\ntext = colored('New color', 'yellow', attrs=['reverse', 'blink'])\nprint(text)\n",
"\u001b[32mThis text is using termcolor package.\u001b[0m\n\u001b[5m\u001b[7m\u001b[33mNew color\u001b[0m\n"
],
[
"'''\n5. Create your own exception with the help of class and functions.\n'''\nclass SalaryNotInRangeError(Exception):\n \"\"\"Exception raised for errors in the input salary.\n\n Attributes:\n salary -- input salary which caused the error\n message -- explanation of the error\n \"\"\"\n\n def __init__(self, salary, message=\"Salary is not in (5000, 15000) range\"):\n self.salary = salary\n self.message = message\n super().__init__(self.message)\n\n\nsalary = int(input(\"Enter salary amount: \"))\nif not 5000 < salary < 15000:\n raise SalaryNotInRangeError(salary)\n",
"Enter salary amount: 25000\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e732858803f0729ae83741545b2f5ec1baf241d6 | 31,330 | ipynb | Jupyter Notebook | homework01/homework_test_modules.ipynb | olegpolivin/Practical_DL | bcba606f90e0dafa95b9d7344118a91679353ec4 | [
"MIT"
] | null | null | null | homework01/homework_test_modules.ipynb | olegpolivin/Practical_DL | bcba606f90e0dafa95b9d7344118a91679353ec4 | [
"MIT"
] | null | null | null | homework01/homework_test_modules.ipynb | olegpolivin/Practical_DL | bcba606f90e0dafa95b9d7344118a91679353ec4 | [
"MIT"
] | null | null | null | 53.011844 | 123 | 0.551931 | [
[
[
"%run homework_modules.ipynb",
"_____no_output_____"
],
[
"import torch\nfrom torch.autograd import Variable\nimport numpy\nimport unittest",
"_____no_output_____"
],
[
"class TestLayers(unittest.TestCase):\n def test_Linear(self):\n np.random.seed(42)\n torch.manual_seed(42)\n\n batch_size, n_in, n_out = 2, 3, 4\n for _ in range(100):\n # layers initialization\n torch_layer = torch.nn.Linear(n_in, n_out)\n custom_layer = Linear(n_in, n_out)\n custom_layer.W = torch_layer.weight.data.numpy()\n custom_layer.b = torch_layer.bias.data.numpy()\n\n layer_input = np.random.uniform(-10, 10, (batch_size, n_in)).astype(np.float32)\n next_layer_grad = np.random.uniform(-10, 10, (batch_size, n_out)).astype(np.float32)\n\n # 1. check layer output\n custom_layer_output = custom_layer.updateOutput(layer_input)\n layer_input_var = Variable(torch.from_numpy(layer_input), requires_grad=True)\n torch_layer_output_var = torch_layer(layer_input_var)\n self.assertTrue(np.allclose(torch_layer_output_var.data.numpy(), custom_layer_output, atol=1e-6))\n \n # 2. check layer input grad\n custom_layer_grad = custom_layer.updateGradInput(layer_input, next_layer_grad)\n torch_layer_output_var.backward(torch.from_numpy(next_layer_grad))\n torch_layer_grad_var = layer_input_var.grad\n self.assertTrue(np.allclose(torch_layer_grad_var.data.numpy(), custom_layer_grad, atol=1e-6))\n\n # 3. check layer parameters grad\n custom_layer.accGradParameters(layer_input, next_layer_grad)\n weight_grad = custom_layer.gradW\n bias_grad = custom_layer.gradb\n torch_weight_grad = torch_layer.weight.grad.data.numpy()\n torch_bias_grad = torch_layer.bias.grad.data.numpy()\n self.assertTrue(np.allclose(torch_weight_grad, weight_grad, atol=1e-6))\n self.assertTrue(np.allclose(torch_bias_grad, bias_grad, atol=1e-6))\n\n def test_SoftMax(self):\n np.random.seed(42)\n torch.manual_seed(42)\n\n batch_size, n_in = 2, 4\n for _ in range(100):\n # layers initialization\n torch_layer = torch.nn.Softmax(dim=1)\n custom_layer = SoftMax()\n\n layer_input = np.random.uniform(-10, 10, (batch_size, n_in)).astype(np.float32)\n next_layer_grad = np.random.random((batch_size, n_in)).astype(np.float32)\n next_layer_grad /= next_layer_grad.sum(axis=-1, keepdims=True)\n next_layer_grad = next_layer_grad.clip(1e-5,1.)\n next_layer_grad = 1. / next_layer_grad\n\n # 1. check layer output\n custom_layer_output = custom_layer.updateOutput(layer_input)\n layer_input_var = Variable(torch.from_numpy(layer_input), requires_grad=True)\n torch_layer_output_var = torch_layer(layer_input_var)\n self.assertTrue(np.allclose(torch_layer_output_var.data.numpy(), custom_layer_output, atol=1e-5))\n\n # 2. check layer input grad\n custom_layer_grad = custom_layer.updateGradInput(layer_input, next_layer_grad)\n torch_layer_output_var.backward(torch.from_numpy(next_layer_grad))\n torch_layer_grad_var = layer_input_var.grad\n self.assertTrue(np.allclose(torch_layer_grad_var.data.numpy(), custom_layer_grad, atol=1e-5))\n \n def test_LogSoftMax(self):\n np.random.seed(42)\n torch.manual_seed(42)\n\n batch_size, n_in = 2, 4\n for _ in range(100):\n # layers initialization\n torch_layer = torch.nn.LogSoftmax(dim=1)\n custom_layer = LogSoftMax()\n\n layer_input = np.random.uniform(-10, 10, (batch_size, n_in)).astype(np.float32)\n next_layer_grad = np.random.random((batch_size, n_in)).astype(np.float32)\n next_layer_grad /= next_layer_grad.sum(axis=-1, keepdims=True)\n\n # 1. 
check layer output\n custom_layer_output = custom_layer.updateOutput(layer_input)\n layer_input_var = Variable(torch.from_numpy(layer_input), requires_grad=True)\n torch_layer_output_var = torch_layer(layer_input_var)\n self.assertTrue(np.allclose(torch_layer_output_var.data.numpy(), custom_layer_output, atol=1e-6))\n\n # 2. check layer input grad\n custom_layer_grad = custom_layer.updateGradInput(layer_input, next_layer_grad)\n torch_layer_output_var.backward(torch.from_numpy(next_layer_grad))\n torch_layer_grad_var = layer_input_var.grad\n self.assertTrue(np.allclose(torch_layer_grad_var.data.numpy(), custom_layer_grad, atol=1e-6))\n\n def test_BatchNormalization(self):\n np.random.seed(42)\n torch.manual_seed(42)\n\n batch_size, n_in = 32, 16\n for _ in range(100):\n # layers initialization\n slope = np.random.uniform(0.01, 0.05)\n alpha = 0.9\n custom_layer = BatchNormalization(alpha)\n custom_layer.train()\n torch_layer = torch.nn.BatchNorm1d(n_in, eps=custom_layer.EPS, momentum=1.-alpha, affine=False)\n custom_layer.moving_mean = torch_layer.running_mean.numpy().copy()\n custom_layer.moving_variance = torch_layer.running_var.numpy().copy()\n\n layer_input = np.random.uniform(-5, 5, (batch_size, n_in)).astype(np.float32)\n next_layer_grad = np.random.uniform(-5, 5, (batch_size, n_in)).astype(np.float32)\n\n # 1. check layer output\n custom_layer_output = custom_layer.updateOutput(layer_input)\n layer_input_var = Variable(torch.from_numpy(layer_input), requires_grad=True)\n torch_layer_output_var = torch_layer(layer_input_var)\n self.assertTrue(np.allclose(torch_layer_output_var.data.numpy(), custom_layer_output, atol=1e-6))\n\n # 2. check layer input grad\n custom_layer_grad = custom_layer.updateGradInput(layer_input, next_layer_grad)\n torch_layer_output_var.backward(torch.from_numpy(next_layer_grad))\n torch_layer_grad_var = layer_input_var.grad\n # please, don't increase `atol` parameter, it's garanteed that you can implement batch norm layer\n # with tolerance 1e-5\n self.assertTrue(np.allclose(torch_layer_grad_var.data.numpy(), custom_layer_grad, atol=1e-5))\n\n # 3. check moving mean\n self.assertTrue(np.allclose(custom_layer.moving_mean, torch_layer.running_mean.numpy()))\n # we don't check moving_variance because pytorch uses slightly different formula for it:\n # it computes moving average for unbiased variance (i.e var*N/(N-1))\n #self.assertTrue(np.allclose(custom_layer.moving_variance, torch_layer.running_var.numpy()))\n\n # 4. 
check evaluation mode\n custom_layer.moving_variance = torch_layer.running_var.numpy().copy()\n custom_layer.evaluate()\n custom_layer_output = custom_layer.updateOutput(layer_input)\n torch_layer.eval()\n torch_layer_output_var = torch_layer(layer_input_var)\n self.assertTrue(np.allclose(torch_layer_output_var.data.numpy(), custom_layer_output, atol=1e-6))\n \n def test_Sequential(self):\n np.random.seed(42)\n torch.manual_seed(42)\n\n batch_size, n_in = 2, 4\n for _ in range(100):\n # layers initialization\n alpha = 0.9\n torch_layer = torch.nn.BatchNorm1d(n_in, eps=BatchNormalization.EPS, momentum=1.-alpha, affine=True)\n torch_layer.bias.data = torch.from_numpy(np.random.random(n_in).astype(np.float32))\n custom_layer = Sequential()\n bn_layer = BatchNormalization(alpha)\n bn_layer.moving_mean = torch_layer.running_mean.numpy().copy()\n bn_layer.moving_variance = torch_layer.running_var.numpy().copy()\n custom_layer.add(bn_layer)\n scaling_layer = ChannelwiseScaling(n_in)\n scaling_layer.gamma = torch_layer.weight.data.numpy()\n scaling_layer.beta = torch_layer.bias.data.numpy()\n custom_layer.add(scaling_layer)\n custom_layer.train()\n\n layer_input = np.random.uniform(-5, 5, (batch_size, n_in)).astype(np.float32)\n next_layer_grad = np.random.uniform(-5, 5, (batch_size, n_in)).astype(np.float32)\n\n # 1. check layer output\n custom_layer_output = custom_layer.updateOutput(layer_input)\n layer_input_var = Variable(torch.from_numpy(layer_input), requires_grad=True)\n torch_layer_output_var = torch_layer(layer_input_var)\n self.assertTrue(np.allclose(torch_layer_output_var.data.numpy(), custom_layer_output, atol=1e-6))\n\n # 2. check layer input grad\n custom_layer_grad = custom_layer.backward(layer_input, next_layer_grad)\n torch_layer_output_var.backward(torch.from_numpy(next_layer_grad))\n torch_layer_grad_var = layer_input_var.grad\n self.assertTrue(np.allclose(torch_layer_grad_var.data.numpy(), custom_layer_grad, atol=1e-5))\n\n # 3. check layer parameters grad\n weight_grad, bias_grad = custom_layer.getGradParameters()[1]\n torch_weight_grad = torch_layer.weight.grad.data.numpy()\n torch_bias_grad = torch_layer.bias.grad.data.numpy()\n self.assertTrue(np.allclose(torch_weight_grad, weight_grad, atol=1e-6))\n self.assertTrue(np.allclose(torch_bias_grad, bias_grad, atol=1e-6))\n\n def test_Dropout(self):\n np.random.seed(42)\n\n batch_size, n_in = 2, 4\n for _ in range(100):\n # layers initialization\n p = np.random.uniform(0.3, 0.7)\n layer = Dropout(p)\n layer.train()\n\n layer_input = np.random.uniform(-5, 5, (batch_size, n_in)).astype(np.float32)\n next_layer_grad = np.random.uniform(-5, 5, (batch_size, n_in)).astype(np.float32)\n\n # 1. check layer output\n layer_output = layer.updateOutput(layer_input)\n self.assertTrue(np.all(np.logical_or(np.isclose(layer_output, 0), \n np.isclose(layer_output*(1.-p), layer_input))))\n\n # 2. check layer input grad\n layer_grad = layer.updateGradInput(layer_input, next_layer_grad)\n self.assertTrue(np.all(np.logical_or(np.isclose(layer_grad, 0), \n np.isclose(layer_grad*(1.-p), next_layer_grad))))\n\n # 3. check evaluation mode\n layer.evaluate()\n layer_output = layer.updateOutput(layer_input)\n self.assertTrue(np.allclose(layer_output, layer_input))\n\n # 4. 
check mask\n p = 0.0\n layer = Dropout(p)\n layer.train()\n layer_output = layer.updateOutput(layer_input)\n self.assertTrue(np.allclose(layer_output, layer_input))\n\n p = 0.5\n layer = Dropout(p)\n layer.train()\n layer_input = np.random.uniform(5, 10, (batch_size, n_in)).astype(np.float32)\n next_layer_grad = np.random.uniform(5, 10, (batch_size, n_in)).astype(np.float32)\n layer_output = layer.updateOutput(layer_input)\n zeroed_elem_mask = np.isclose(layer_output, 0)\n layer_grad = layer.updateGradInput(layer_input, next_layer_grad) \n self.assertTrue(np.all(zeroed_elem_mask == np.isclose(layer_grad, 0)))\n\n # 5. dropout mask should be generated independently for every input matrix element, not for row/column\n batch_size, n_in = 1000, 1\n p = 0.8\n layer = Dropout(p)\n layer.train()\n\n layer_input = np.random.uniform(5, 10, (batch_size, n_in)).astype(np.float32)\n layer_output = layer.updateOutput(layer_input)\n self.assertTrue(np.sum(np.isclose(layer_output, 0)) != layer_input.size)\n\n layer_input = layer_input.T\n layer_output = layer.updateOutput(layer_input)\n self.assertTrue(np.sum(np.isclose(layer_output, 0)) != layer_input.size)\n def test_LeakyReLU(self):\n np.random.seed(42)\n torch.manual_seed(42)\n\n batch_size, n_in = 2, 4\n for _ in range(100):\n # layers initialization\n slope = np.random.uniform(0.01, 0.05)\n torch_layer = torch.nn.LeakyReLU(slope)\n custom_layer = LeakyReLU(slope)\n\n layer_input = np.random.uniform(-5, 5, (batch_size, n_in)).astype(np.float32)\n next_layer_grad = np.random.uniform(-5, 5, (batch_size, n_in)).astype(np.float32)\n\n # 1. check layer output\n custom_layer_output = custom_layer.updateOutput(layer_input)\n layer_input_var = Variable(torch.from_numpy(layer_input), requires_grad=True)\n torch_layer_output_var = torch_layer(layer_input_var)\n self.assertTrue(np.allclose(torch_layer_output_var.data.numpy(), custom_layer_output, atol=1e-6))\n\n # 2. check layer input grad\n custom_layer_grad = custom_layer.updateGradInput(layer_input, next_layer_grad)\n torch_layer_output_var.backward(torch.from_numpy(next_layer_grad))\n torch_layer_grad_var = layer_input_var.grad\n self.assertTrue(np.allclose(torch_layer_grad_var.data.numpy(), custom_layer_grad, atol=1e-6))\n\n def test_ELU(self):\n np.random.seed(42)\n torch.manual_seed(42)\n\n batch_size, n_in = 2, 4\n for _ in range(100):\n # layers initialization\n alpha = 1.0\n torch_layer = torch.nn.ELU(alpha)\n custom_layer = ELU(alpha)\n\n layer_input = np.random.uniform(-5, 5, (batch_size, n_in)).astype(np.float32)\n next_layer_grad = np.random.uniform(-5, 5, (batch_size, n_in)).astype(np.float32)\n\n # 1. check layer output\n custom_layer_output = custom_layer.updateOutput(layer_input)\n layer_input_var = Variable(torch.from_numpy(layer_input), requires_grad=True)\n torch_layer_output_var = torch_layer(layer_input_var)\n self.assertTrue(np.allclose(torch_layer_output_var.data.numpy(), custom_layer_output, atol=1e-6))\n\n # 2. 
check layer input grad\n custom_layer_grad = custom_layer.updateGradInput(layer_input, next_layer_grad)\n torch_layer_output_var.backward(torch.from_numpy(next_layer_grad))\n torch_layer_grad_var = layer_input_var.grad\n self.assertTrue(np.allclose(torch_layer_grad_var.data.numpy(), custom_layer_grad, atol=1e-6))\n\n def test_SoftPlus(self):\n np.random.seed(42)\n torch.manual_seed(42)\n\n batch_size, n_in = 2, 4\n for _ in range(100):\n # layers initialization\n torch_layer = torch.nn.Softplus()\n custom_layer = SoftPlus()\n\n layer_input = np.random.uniform(-5, 5, (batch_size, n_in)).astype(np.float32)\n next_layer_grad = np.random.uniform(-5, 5, (batch_size, n_in)).astype(np.float32)\n\n # 1. check layer output\n custom_layer_output = custom_layer.updateOutput(layer_input)\n layer_input_var = Variable(torch.from_numpy(layer_input), requires_grad=True)\n torch_layer_output_var = torch_layer(layer_input_var)\n self.assertTrue(np.allclose(torch_layer_output_var.data.numpy(), custom_layer_output, atol=1e-6))\n\n # 2. check layer input grad\n custom_layer_grad = custom_layer.updateGradInput(layer_input, next_layer_grad)\n torch_layer_output_var.backward(torch.from_numpy(next_layer_grad))\n torch_layer_grad_var = layer_input_var.grad\n self.assertTrue(np.allclose(torch_layer_grad_var.data.numpy(), custom_layer_grad, atol=1e-6))\n\n def test_ClassNLLCriterionUnstable(self):\n np.random.seed(42)\n torch.manual_seed(42)\n\n batch_size, n_in = 2, 4\n for _ in range(100):\n # layers initialization\n torch_layer = torch.nn.NLLLoss()\n custom_layer = ClassNLLCriterionUnstable()\n\n layer_input = np.random.uniform(0, 1, (batch_size, n_in)).astype(np.float32)\n layer_input /= layer_input.sum(axis=-1, keepdims=True)\n layer_input = layer_input.clip(custom_layer.EPS, 1. - custom_layer.EPS) # unifies input\n target_labels = np.random.choice(n_in, batch_size)\n target = np.zeros((batch_size, n_in), np.float32)\n target[np.arange(batch_size), target_labels] = 1 # one-hot encoding\n\n # 1. check layer output\n custom_layer_output = custom_layer.updateOutput(layer_input, target)\n layer_input_var = Variable(torch.from_numpy(layer_input), requires_grad=True)\n torch_layer_output_var = torch_layer(torch.log(layer_input_var), \n Variable(torch.from_numpy(target_labels), requires_grad=False))\n self.assertTrue(np.allclose(torch_layer_output_var.data.numpy(), custom_layer_output, atol=1e-6))\n\n # 2. check layer input grad\n custom_layer_grad = custom_layer.updateGradInput(layer_input, target)\n torch_layer_output_var.backward()\n torch_layer_grad_var = layer_input_var.grad\n self.assertTrue(np.allclose(torch_layer_grad_var.data.numpy(), custom_layer_grad, atol=1e-6))\n\n def test_ClassNLLCriterion(self):\n np.random.seed(42)\n torch.manual_seed(42)\n\n batch_size, n_in = 2, 4\n for _ in range(100):\n # layers initialization\n torch_layer = torch.nn.NLLLoss()\n custom_layer = ClassNLLCriterion()\n\n layer_input = np.random.uniform(-5, 5, (batch_size, n_in)).astype(np.float32)\n layer_input = torch.nn.LogSoftmax(dim=1)(Variable(torch.from_numpy(layer_input))).data.numpy()\n target_labels = np.random.choice(n_in, batch_size)\n target = np.zeros((batch_size, n_in), np.float32)\n target[np.arange(batch_size), target_labels] = 1 # one-hot encoding\n\n # 1. 
check layer output\n custom_layer_output = custom_layer.updateOutput(layer_input, target)\n layer_input_var = Variable(torch.from_numpy(layer_input), requires_grad=True)\n torch_layer_output_var = torch_layer(layer_input_var, \n Variable(torch.from_numpy(target_labels), requires_grad=False))\n self.assertTrue(np.allclose(torch_layer_output_var.data.numpy(), custom_layer_output, atol=1e-6))\n\n # 2. check layer input grad\n custom_layer_grad = custom_layer.updateGradInput(layer_input, target)\n torch_layer_output_var.backward()\n torch_layer_grad_var = layer_input_var.grad\n self.assertTrue(np.allclose(torch_layer_grad_var.data.numpy(), custom_layer_grad, atol=1e-6))\n \n def test_adam_optimizer(self):\n state = {} \n config = {'learning_rate': 1e-3, 'beta1': 0.9, 'beta2':0.999, 'epsilon':1e-8}\n variables = [[np.arange(10).astype(np.float64)]]\n gradients = [[np.arange(10).astype(np.float64)]]\n adam_optimizer(variables, gradients, config, state)\n self.assertTrue(np.allclose(state['m'][0], np.array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, \n 0.6, 0.7, 0.8, 0.9])))\n self.assertTrue(np.allclose(state['v'][0], np.array([0., 0.001, 0.004, 0.009, 0.016, 0.025, \n 0.036, 0.049, 0.064, 0.081])))\n self.assertTrue(state['t'] == 1)\n self.assertTrue(np.allclose(variables[0][0], np.array([0., 0.999, 1.999, 2.999, 3.999, 4.999, \n 5.999, 6.999, 7.999, 8.999])))\n adam_optimizer(variables, gradients, config, state)\n self.assertTrue(np.allclose(state['m'][0], np.array([0., 0.19, 0.38, 0.57, 0.76, 0.95, 1.14, \n 1.33, 1.52, 1.71])))\n self.assertTrue(np.allclose(state['v'][0], np.array([0., 0.001999, 0.007996, 0.017991, \n 0.031984, 0.049975, 0.071964, 0.097951, \n 0.127936, 0.161919])))\n self.assertTrue(state['t'] == 2)\n self.assertTrue(np.allclose(variables[0][0], np.array([0., 0.998, 1.998, 2.998, 3.998, 4.998, \n 5.998, 6.998, 7.998, 8.998])))\n \nsuite = unittest.TestLoader().loadTestsFromTestCase(TestLayers)\nunittest.TextTestRunner(verbosity=2).run(suite)",
"test_BatchNormalization (__main__.TestLayers) ... ok\ntest_ClassNLLCriterion (__main__.TestLayers) ... ok\ntest_ClassNLLCriterionUnstable (__main__.TestLayers) ... ok\ntest_Dropout (__main__.TestLayers) ... ok\ntest_ELU (__main__.TestLayers) ... ok\ntest_LeakyReLU (__main__.TestLayers) ... ok\ntest_Linear (__main__.TestLayers) ... ok\ntest_LogSoftMax (__main__.TestLayers) ... ok\ntest_Sequential (__main__.TestLayers) ... ok\ntest_SoftMax (__main__.TestLayers) ... ok\ntest_SoftPlus (__main__.TestLayers) ... ok\ntest_adam_optimizer (__main__.TestLayers) ... ok\n\n----------------------------------------------------------------------\nRan 12 tests in 0.683s\n\nOK\n"
],
[
"class TestAdvancedLayers(unittest.TestCase):\n def test_Conv2d(self):\n np.random.seed(42)\n torch.manual_seed(42)\n\n batch_size, n_in, n_out = 2, 3, 4\n h,w = 5,6\n kern_size = 3\n for _ in range(100):\n # layers initialization\n torch_layer = torch.nn.Conv2d(n_in, n_out, kern_size, padding=1)\n custom_layer = Conv2d(n_in, n_out, kern_size)\n custom_layer.W = torch_layer.weight.data.numpy() # [n_out, n_in, kern, kern]\n custom_layer.b = torch_layer.bias.data.numpy()\n\n layer_input = np.random.uniform(-1, 1, (batch_size, n_in, h,w)).astype(np.float32)\n next_layer_grad = np.random.uniform(-1, 1, (batch_size, n_out, h, w)).astype(np.float32)\n\n # 1. check layer output\n custom_layer_output = custom_layer.updateOutput(layer_input)\n layer_input_var = Variable(torch.from_numpy(layer_input), requires_grad=True)\n torch_layer_output_var = torch_layer(layer_input_var)\n self.assertTrue(np.allclose(torch_layer_output_var.data.numpy(), custom_layer_output, atol=1e-6))\n \n # 2. check layer input grad\n custom_layer_grad = custom_layer.updateGradInput(layer_input, next_layer_grad)\n torch_layer_output_var.backward(torch.from_numpy(next_layer_grad))\n torch_layer_grad_var = layer_input_var.grad\n self.assertTrue(np.allclose(torch_layer_grad_var.data.numpy(), custom_layer_grad, atol=1e-6))\n \n # 3. check layer parameters grad\n custom_layer.accGradParameters(layer_input, next_layer_grad)\n weight_grad = custom_layer.gradW\n bias_grad = custom_layer.gradb\n torch_weight_grad = torch_layer.weight.grad.data.numpy()\n torch_bias_grad = torch_layer.bias.grad.data.numpy()\n #m = ~np.isclose(torch_weight_grad, weight_grad, atol=1e-5)\n self.assertTrue(np.allclose(torch_weight_grad, weight_grad, atol=1e-6, ))\n self.assertTrue(np.allclose(torch_bias_grad, bias_grad, atol=1e-6))\n \n def test_MaxPool2d(self):\n np.random.seed(42)\n torch.manual_seed(42)\n\n batch_size, n_in = 2, 3\n h,w = 4,6\n kern_size = 2\n for _ in range(100):\n # layers initialization\n torch_layer = torch.nn.MaxPool2d(kern_size)\n custom_layer = MaxPool2d(kern_size)\n\n layer_input = np.random.uniform(-10, 10, (batch_size, n_in, h,w)).astype(np.float32)\n next_layer_grad = np.random.uniform(-10, 10, (batch_size, n_in, \n h // kern_size, w // kern_size)).astype(np.float32)\n\n # 1. check layer output\n custom_layer_output = custom_layer.updateOutput(layer_input)\n layer_input_var = Variable(torch.from_numpy(layer_input), requires_grad=True)\n torch_layer_output_var = torch_layer(layer_input_var)\n self.assertTrue(np.allclose(torch_layer_output_var.data.numpy(), custom_layer_output, atol=1e-6))\n \n # 2. check layer input grad\n custom_layer_grad = custom_layer.updateGradInput(layer_input, next_layer_grad)\n torch_layer_output_var.backward(torch.from_numpy(next_layer_grad))\n torch_layer_grad_var = layer_input_var.grad\n self.assertTrue(np.allclose(torch_layer_grad_var.data.numpy(), custom_layer_grad, atol=1e-6))\n\n\nsuite = unittest.TestLoader().loadTestsFromTestCase(TestAdvancedLayers)\nunittest.TextTestRunner(verbosity=2).run(suite)",
"test_Conv2d (__main__.TestAdvancedLayers) ... ok\ntest_MaxPool2d (__main__.TestAdvancedLayers) ... ok\n\n----------------------------------------------------------------------\nRan 2 tests in 0.375s\n\nOK\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
e7329ba4a7f2c92b9eb36f8db28161a9b2617f18 | 11,889 | ipynb | Jupyter Notebook | 5. Loading and Visualizing Network Data (Student).ipynb | sbrown97/network-analysis | 1266e1f524edde41030aca7ceb483975733866b4 | [
"MIT"
] | null | null | null | 5. Loading and Visualizing Network Data (Student).ipynb | sbrown97/network-analysis | 1266e1f524edde41030aca7ceb483975733866b4 | [
"MIT"
] | null | null | null | 5. Loading and Visualizing Network Data (Student).ipynb | sbrown97/network-analysis | 1266e1f524edde41030aca7ceb483975733866b4 | [
"MIT"
] | null | null | null | 25.958515 | 319 | 0.571284 | [
[
[
"import pandas as pd\nimport networkx as nx\nimport os\nimport numpy as np\nimport warnings\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom circos import CircosPlot\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"# Tables to Networks, Networks to Tables\n\nNetworks can be represented in a tabular form in two ways: As an adjacency list with edge attributes stored as columnar values, and as a node list with node attributes stored as columnar values.\n\nStoring the network data as a single massive adjacency table, with node attributes repeated on each row, can get unwieldy, especially if the graph is large, or grows to be so. One way to get around this is to store two files: one with node data and node attributes, and one with edge data and edge attributes. \n\nThe Divvy bike sharing dataset is one such example of a network data set that has been stored as such.",
"_____no_output_____"
],
[
"# Loading Node Lists and Adjacency Lists\n\nLet's use the Divvy bike sharing data set as a starting point. The Divvy data set is comprised of the following data:\n\n- Stations and metadata (like a node list with attributes saved)\n- Trips and metadata (like an edge list with attributes saved)\n\nThe `README.txt` file in the Divvy directory should help orient you around the data.",
"_____no_output_____"
]
],
[
[
"# This block of code checks to make sure that a particular directory is present.\nif \"divvy_2013\" not in os.listdir('datasets/'):\n print('Unzip the divvy_2013.zip file in the datasets folder.')",
"_____no_output_____"
],
[
"stations = pd.read_csv('datasets/divvy_2013/Divvy_Stations_2013.csv', parse_dates=['online date'], index_col='id', encoding='utf-8')\nstations",
"_____no_output_____"
],
[
"trips = pd.read_csv('datasets/divvy_2013/Divvy_Trips_2013.csv', \n parse_dates=['starttime', 'stoptime'], \n index_col=['trip_id'])\ntrips = trips.sort()\ntrips",
"_____no_output_____"
]
],
[
[
"At this point, we have our `stations` and `trips` data loaded into memory. \n\nHow we construct the graph depends on the kind of questions we want to answer, which makes the definition of the \"unit of consideration\" (or the entities for which we are trying to model their relationships) is extremely important. \n\nLet's try to answer the question: \"What are the most popular trip paths?\" In this case, the bike station is a reasonable \"unit of consideration\", so we will use the bike stations as the nodes. \n\nTo start, let's initialize an directed graph `G`.",
"_____no_output_____"
]
],
[
[
"G = nx.DiGraph()",
"_____no_output_____"
]
],
[
[
"Then, let's iterate over the `stations` DataFrame, and add in the node attributes.",
"_____no_output_____"
]
],
[
[
"for r, d in stations.iterrows(): # call the pandas DataFrame row-by-row iterator\n G.add_node(r, attr_dict=d.to_dict()) ",
"_____no_output_____"
]
],
[
[
"In order to answer the question of \"which stations are important\", we need to specify things a bit more. Perhaps a measure such as **betweenness centrality** or **degree centrality** may be appropriate here.\n\nThe naive way would be to iterate over all the rows. Go ahead and try it at your own risk - it may take a long time :-). Alternatively, I would suggest doing a `pandas` `groupby`.",
"_____no_output_____"
]
],
[
[
"# # Run the following code at your own risk :)\n# for r, d in trips.iterrows():\n# start = d['from_station_id']\n# end = d['to_station_id']\n# if (start, end) not in G.edges():\n# G.add_edge(start, end, count=1)\n# else:\n# G.edge[start][end]['count'] += 1",
"_____no_output_____"
],
[
"for (start, stop), d in trips.groupby(['from_station_id', 'to_station_id']):\n G.add_edge(start, stop, count=len(d))",
"_____no_output_____"
]
],
[
[
"### Exercise\n\nFlex your memory muscles: can you make a scatter plot of the distribution of the number edges that have a certain number of trips? \n\nThe key should be the number of trips between two nodes, and the value should be the number of edges that have that number of trips.",
"_____no_output_____"
]
],
[
[
"from collections import Counter\n# Count the number of edges that have x trips recorded on them.\ntrip_count_distr = ______________________________\n\n# Then plot the distribution of these\nplt.scatter(_______________, _______________, alpha=0.1)\nplt.yscale('log')\nplt.xlabel('num. of trips')\nplt.ylabel('num. of edges')",
"_____no_output_____"
]
],
[
[
"### Exercise\n\nCreate a new graph, and filter out the edges such that only those with more than 100 trips taken (i.e. `count >= 100`) are left.",
"_____no_output_____"
]
],
[
[
"# Filter the edges to just those with more than 100 trips.\nG_filtered = G.copy()\nfor u, v, d in G.edges(data=True):\n # Fill in your code here.\n \nlen(G_filtered.edges())",
"_____no_output_____"
]
],
[
[
"Let's now try drawing the graph.",
"_____no_output_____"
],
[
"### Exercise\n\nUse `nx.draw(my_graph)` to draw the filtered graph to screen.",
"_____no_output_____"
]
],
[
[
"# Fill in your code here.\n",
"_____no_output_____"
]
],
[
[
"### Exercise\n\nTry visualizing the graph using a CircosPlot. Order the nodes by their connectivity in the **original** graph, but plot only the **filtered** graph edges.",
"_____no_output_____"
]
],
[
[
"nodes = sorted(_________________, key=lambda x:_________________)\nedges = ___________\nedgeprops = dict(alpha=0.1)\nnodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes)) \nfig = plt.figure(figsize=(6,6))\nax = fig.add_subplot(111)\nc = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)\nc.draw()\nplt.savefig('images/divvy.png', dpi=300)",
"_____no_output_____"
]
],
[
[
"In this visual, nodes are sorted from highest connectivity to lowest connectivity in the **unfiltered** graph.\n\nEdges represent only trips that were taken >100 times between those two nodes.\n\nSome things should be quite evident here. There are lots of trips between the highly connected nodes and other nodes, but there are local \"high traffic\" connections between stations of low connectivity as well (nodes in the top-right quadrant).",
"_____no_output_____"
],
[
"# Saving NetworkX Graph Files\n\nNetworkX's API offers many formats for storing graphs to disk. If you intend to work exclusively with NetworkX, then pickling the file to disk is probably the easiest way.\n\nTo write to disk: \n\n nx.write_gpickle(G, handle)\n\nTo load from disk:\n \n G = nx.read_gpickle(handle)",
"_____no_output_____"
]
],
[
[
"nx.write_gpickle(G, 'datasets/divvy_2013/divvy_graph.pkl')",
"_____no_output_____"
],
[
"G = nx.read_gpickle('datasets/divvy_2013/divvy_graph.pkl')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
e732a1e15350eb6a4c40b5cd906ad1e2ce24c832 | 8,163 | ipynb | Jupyter Notebook | examples.ipynb | accenturelabs/blackhat-arsenal-2016 | bda3c4f4d9e6932cf7910a92bf6aa071d178c86b | [
"BSD-3-Clause"
] | 3 | 2016-09-02T12:57:24.000Z | 2018-05-08T15:42:07.000Z | examples.ipynb | accenturelabs/blackhat-arsenal-2016 | bda3c4f4d9e6932cf7910a92bf6aa071d178c86b | [
"BSD-3-Clause"
] | null | null | null | examples.ipynb | accenturelabs/blackhat-arsenal-2016 | bda3c4f4d9e6932cf7910a92bf6aa071d178c86b | [
"BSD-3-Clause"
] | null | null | null | 22.182065 | 190 | 0.514027 | [
[
[
"# SIEM Data Exploration with Spark\n\n## Data Formatting\n<hr>\n\n### Data Source\nData used in this example is from a market leading SIEM\n\n### File Names\nIndividual CSV files are converted from CSV to Parquet files (see `architecture.pdf` for more info) then saved by hour with the name format `YYYY-MM-DD-HH`\n\n### Field Names\nField names match from the header information from the original CSV\n\n## Config Parameters\n<hr>\nSet these variables to connect to your HDFS cluster",
"_____no_output_____"
]
],
[
[
"# HDFS config parameters\nhdfsNameNode = \"10.0.0.1\"\nhdfsPort = \"8020\"",
"_____no_output_____"
]
],
[
[
"## Import Spark Libraries\n<hr>",
"_____no_output_____"
]
],
[
[
"# Import libraries for PySpark/SparkSQL\nfrom pyspark import SQLContext\nfrom pyspark.sql.functions import *\n# Create a SQLContext to use for SQL queries\nsq = SQLContext(sc)ß",
"_____no_output_____"
]
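,
[
"# Editor's illustrative sketch: reading a single hourly Parquet file using the\n# YYYY-MM-DD-HH naming scheme described above. The specific hour is an arbitrary\n# example, not a path from the original walkthrough.\nsample_hour = sq.read.parquet(\"hdfs://\"+hdfsNameNode+\":\"+hdfsPort+\"/data/2016-06-01-13\")\nsample_hour.printSchema()",
"_____no_output_____"
]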
],
[
[
"## Example 1\n<hr>\n\n### Network communication lookup, from source subnet to multiple destinations\n\n#### SQL Example: \n```WHERE sourceAddress CONTAINS \"55.54.53.\" AND ( ( destinationAddress = \"10.0.0.50\" ) OR ( destinationAddress = \"10.0.0.51\" ) OR ( destinationAddress = \"10.0.0.52\" ) )```",
"_____no_output_____"
]
],
[
[
"%%time\n### One Day\ndata1 = sq.read.parquet(\"hdfs://\"+hdfsNameNode+\":\"+hdfsPort+\"/data/2016-06-01*\")\npdf1 = data1.filter(data1.sourceAddress.startswith(\"55.54.53.\")) \\\n .filter(\"destinationAddress = '10.0.0.50' OR destinationAddress = '10.0.0.51' OR destinationAddress = '10.0.0.52'\") \\\n .toPandas()",
"_____no_output_____"
],
[
"### Number of results\nlen(pdf1)",
"_____no_output_____"
],
[
"### Display the first 10 results\npdf1.head(10)",
"_____no_output_____"
],
[
"%%time\n### One Week\ndata2 = sq.read.parquet(\"hdfs://\"+hdfsNameNode+\":\"+hdfsPort+\"/data/2016-06-0[1-7]*\")\npdf2 = data2.filter(data2.sourceAddress.startswith(\"55.54.53.\")) \\\n .filter(\"destinationAddress = '10.0.0.50' OR destinationAddress = '10.0.0.51' OR destinationAddress = '10.0.0.52'\") \\\n .toPandas()",
"_____no_output_____"
],
[
"### Number of results\nlen(pdf2)",
"_____no_output_____"
],
[
"### Display the first 10 results\npdf2.head(10)",
"_____no_output_____"
]
],
[
[
"## Example 2\n<hr>\n\n### Account failed logon attempts lookup, using startswith keyword\n\n#### SQL Example:\n```WHERE destinationUserName startswith \"ads.\" AND categoryOutcome = \"/Failure\"```",
"_____no_output_____"
]
],
[
[
"%%time\n### One Day\ndata5 = sq.read.parquet(\"hdfs://\"+hdfsNameNode+\":\"+hdfsPort+\"/data/2016-06-01*\")\npdf5 = data5.filter(data5.destinationUserName.startswith(\"ads.\")) \\\n .filter(data5.categoryOutcome == \"/Failure\") \\\n .toPandas()",
"_____no_output_____"
],
[
"### Number of results\nlen(pdf5)",
"_____no_output_____"
],
[
"### Display the first 10 results\npdf5.head(10)",
"_____no_output_____"
],
[
"%%time\n### One Week\ndata6 = sq.read.parquet(\"hdfs://\"+hdfsNameNode+\":\"+hdfsPort+\"/data/2016-06-0[1-7]*\")\npdf6 = data6.filter(data6.destinationUserName.startswith(\"ads.\")) \\\n .filter(data6.categoryOutcome == \"/Failure\") \\\n .toPandas()",
"_____no_output_____"
],
[
"### Number of results\nlen(pdf6)",
"_____no_output_____"
],
[
"### Display the first 10 results\npdf6.head(10)",
"_____no_output_____"
]
],
[
[
"## Example 3\n<hr>\n\n### Malware infection lookup, particular keyword in message field\n\n#### SQL Example:\n```WHERE deviceVendor=\"Symantec\" AND message contains \"exe\"```",
"_____no_output_____"
]
],
[
[
"%%time\n### One Day\ndata3 = sq.read.parquet(\"hdfs://\"+hdfsNameNode+\":\"+hdfsPort+\"/data/2016-06-01*\")\npdf3 = data3.filter(data3.deviceVendor == \"Symantec\") \\\n .filter(data3.message.like(\"%exe%\")) \\\n .toPandas()",
"_____no_output_____"
],
[
"### Number of results\nlen(pdf3)",
"_____no_output_____"
],
[
"### Display the first 10 results\npdf3.head(10)",
"_____no_output_____"
],
[
"%%time\n### One Week\ndata4 = sq.read.parquet(\"hdfs://\"+hdfsNameNode+\":\"+hdfsPort+\"/data/2016-06-0[1-7]*\")\npdf4 = data4.filter(data4.deviceVendor == \"Symantec\") \\\n .filter(data4.message.like(\"%exe%\")) \\\n .toPandas()",
"_____no_output_____"
],
[
"### Number of results\nlen(pdf4)",
"_____no_output_____"
],
[
"### Display the first 10 results\npdf4.head(10)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e732a2cb8584debfd965b9785e61299b4077b432 | 9,825 | ipynb | Jupyter Notebook | textbooks/NaturalLanguageProcessingPythonAndNLTK/Ch02_Text_wrangling_and_processing.ipynb | sudhu26/data-science-portfolio | 88f7a350cbd9245e4f92ff1829e49c5d378c609d | [
"MIT"
] | null | null | null | textbooks/NaturalLanguageProcessingPythonAndNLTK/Ch02_Text_wrangling_and_processing.ipynb | sudhu26/data-science-portfolio | 88f7a350cbd9245e4f92ff1829e49c5d378c609d | [
"MIT"
] | null | null | null | textbooks/NaturalLanguageProcessingPythonAndNLTK/Ch02_Text_wrangling_and_processing.ipynb | sudhu26/data-science-portfolio | 88f7a350cbd9245e4f92ff1829e49c5d378c609d | [
"MIT"
] | 1 | 2021-03-26T11:47:37.000Z | 2021-03-26T11:47:37.000Z | 21.930804 | 204 | 0.527735 | [
[
[
"# TOC",
"_____no_output_____"
],
[
" __Chapter 2 - Text wrangling and processing__\n\n1. [Import](#Import)\n1. [Text wrangling](#Text-wrangling)\n1. [Tokenization](#Tokenization)\n1. [Stemming](#Stemming)\n1. [Lemmatization](#Lemmatization)\n1. [Stop word removal](#Stop-word-removal)\n1. [Spelling correction](#Spelling-correction)",
"_____no_output_____"
],
[
"# Import",
"_____no_output_____"
],
[
"<a id = 'Import'></a>",
"_____no_output_____"
]
],
[
[
"# Standard libary and settings\nimport os\nimport sys\nimport importlib\nimport itertools\nimport warnings\n\nwarnings.simplefilter(\"ignore\")\nfrom IPython.core.display import display, HTML\n\ndisplay(HTML(\"<style>.container { width:95% !important; }</style>\"))\n\n# Data extensions and settings\nimport numpy as np\n\nnp.set_printoptions(threshold=np.inf, suppress=True)\nimport pandas as pd\n\npd.set_option(\"display.max_rows\", 500)\npd.set_option(\"display.max_columns\", 500)\npd.options.display.float_format = \"{:,.6f}\".format\n\n# Modeling extensions\nimport nltk\n\n# Visualization extensions and settings\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\nsns.set_style(\"whitegrid\")",
"_____no_output_____"
]
],
[
[
"# Text wrangling\n",
"_____no_output_____"
],
[
"<a id = 'Text-wrangling'></a>",
"_____no_output_____"
]
],
[
[
"nltk.download()",
"showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml\n"
],
[
"# split into sentences using sent_tokenize\nfrom nltk.tokenize import sent_tokenize\n\ninputstring = \"this is an example sent. the sentence splitter will split on sent markers. Ohh really!!\"\n\nall_sent = sent_tokenize(inputstring)\nprint(all_sent)",
"['this is an example sent.', 'the sentence splitter will split on sent markers.', 'Ohh really!', '!']\n"
],
[
"# create a custom sentence splitter\nimport nltk.tokenize.punkt\n\ntokenizer = nltk.tokenize.PunktSentenceTokenizer()",
"_____no_output_____"
]
],
[
[
"# Tokenization\n\nA token, aka a word, is the minimal unit that a machine can evaluate and process. Tokenization is the process of splitting text data down to the point of building a collection of individual words.\n",
"_____no_output_____"
],
[
"<a id = 'Tokenization'></a>",
"_____no_output_____"
]
],
[
[
"# simple split using basic Python\ns = \"Hi everyone! hola gr8\"\nprint(s.split())",
"['Hi', 'everyone!', 'hola', 'gr8']\n"
],
[
"# simple split nltk\nfrom nltk.tokenize import word_tokenize\n\nword_tokenize(s)",
"_____no_output_____"
],
[
"# basic examples with various tokenizers\nfrom nltk.tokenize import regexp_tokenize, wordpunct_tokenize, blankline_tokenize\n\nprint(regexp_tokenize(s, pattern=\"\\w+\"))\nprint(regexp_tokenize(s, pattern=\"\\d+\"))\nprint(wordpunct_tokenize(s))\nprint(blankline_tokenize(s))",
"['Hi', 'everyone', 'hola', 'gr8']\n['8']\n['Hi', 'everyone', '!', 'hola', 'gr8']\n['Hi everyone! hola gr8']\n"
]
],
[
[
"# Stemming\n\nStemming is the process of reducing a token down to its stem, i.e. reducing 'eating' down to 'eat'\n",
"_____no_output_____"
],
[
"<a id = 'Stemming'></a>",
"_____no_output_____"
]
],
[
[
"# basic stemming examples\nfrom nltk.stem import PorterStemmer\nfrom nltk.stem.lancaster import LancasterStemmer\n\npst = PorterStemmer()\nlst = LancasterStemmer()\nprint(lst.stem(\"eating\"))\nprint(pst.stem(\"shopping\"))",
"eat\nshop\n"
]
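,
[
"# Editor's sketch: the two stemmers above can disagree because Lancaster is the\n# more aggressive of the two; comparing them side by side makes that visible.\nfor word in ['running', 'maximum', 'ponies']:\n print(word, pst.stem(word), lst.stem(word))",
"_____no_output_____"
]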
],
[
[
"# Lemmatization\n\nLemmatization is a more precide way of converting tokens to their roots. Lemmatization uses context and parts of speech to determine how to get to the root, aka lemma.\n",
"_____no_output_____"
],
[
"<a id = 'Lemmatization'></a>",
"_____no_output_____"
]
],
[
[
"# lemmatization that uses wordnet, a semantic dictionary for performing lookups\nfrom nltk.stem import WordNetLemmatizer\n\nwlem = WordNetLemmatizer()\nwlem.lemmatize(\"dogs\")",
"_____no_output_____"
]
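,
[
"# Editor's sketch: the part-of-speech argument mentioned above changes the result.\n# Tagging the token as a verb ('v') lets WordNet return the verb lemma; with the\n# default noun POS the surface form is often returned unchanged.\nprint(wlem.lemmatize('running', pos='v'))\nprint(wlem.lemmatize('ate', pos='v'))\nprint(wlem.lemmatize('running'))",
"_____no_output_____"
]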
],
[
[
"# Stop word removal\n\nStop word removal is the process is removing words that occur commonly across documents and generally have no significance. These stop words lists are typically hand-curated lists of words\n",
"_____no_output_____"
],
[
"<a id = 'Stop-word-removal'></a>",
"_____no_output_____"
]
],
[
[
"# remove stop words from a sample sentence\nfrom nltk.corpus import stopwords\n\nstoplist = stopwords.words(\"english\")\ntext = \"this is just a test, only a test\"\ncleanwords = [word for word in text.split() if word not in stoplist]\nprint(cleanwords)",
"['test,', 'test']\n"
]
],
[
[
"# Spelling correction\n\nNLTK includes an algorithm called edit-distance that can be used to perform fuzzy string matching.\n",
"_____no_output_____"
],
[
"<a id = 'Spelling-correction'></a>",
"_____no_output_____"
]
],
[
[
"# calculate Levenshtein distance between two words\nfrom nltk.metrics import edit_distance\n\nedit_distance(\"rain\", \"shine\")",
"_____no_output_____"
]
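,
[
"# Editor's sketch of simple spelling correction with edit_distance: pick the\n# candidate closest to the misspelling (the candidate list is a toy example).\ncandidates = ['rain', 'shine', 'train', 'reign']\nmisspelled = 'rainn'\nmin(candidates, key=lambda word: edit_distance(misspelled, word))",
"_____no_output_____"
]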
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
e732c0f7c29a1a4c73e3ca2d604144d32a2cdf7f | 82,301 | ipynb | Jupyter Notebook | example/entities/load-entities.ipynb | Jeansding/Malaya | fdf1af178ecc5ec4575298612101362ccc4a94fb | [
"MIT"
] | 2 | 2019-06-23T20:19:22.000Z | 2020-04-16T13:02:32.000Z | example/entities/load-entities.ipynb | Jeansding/Malaya | fdf1af178ecc5ec4575298612101362ccc4a94fb | [
"MIT"
] | null | null | null | example/entities/load-entities.ipynb | Jeansding/Malaya | fdf1af178ecc5ec4575298612101362ccc4a94fb | [
"MIT"
] | null | null | null | 59.46604 | 1,278 | 0.520358 | [
[
[
"%%time\nimport malaya",
"CPU times: user 11.9 s, sys: 1.48 s, total: 13.4 s\nWall time: 17.4 s\n"
]
],
[
[
"## List available deep learning NER models",
"_____no_output_____"
]
],
[
[
"malaya.entity.available_deep_model()",
"_____no_output_____"
]
],
[
[
"## Describe supported entities",
"_____no_output_____"
]
],
[
[
"malaya.describe_entities()",
"OTHER - Other\nlaw - law, regulation, related law documents, documents, etc\nlocation - location, place\norganization - organization, company, government, facilities, etc\nperson - person, group of people, believes, etc\nquantity - numbers, quantity\ntime - date, day, time, etc\nevent - unique event happened, etc\n"
],
[
"string = 'KUALA LUMPUR: Sempena sambutan Aidilfitri minggu depan, Perdana Menteri Tun Dr Mahathir Mohamad dan Menteri Pengangkutan Anthony Loke Siew Fook menitipkan pesanan khas kepada orang ramai yang mahu pulang ke kampung halaman masing-masing. Dalam video pendek terbitan Jabatan Keselamatan Jalan Raya (JKJR) itu, Dr Mahathir menasihati mereka supaya berhenti berehat dan tidur sebentar sekiranya mengantuk ketika memandu.'",
"_____no_output_____"
]
],
[
[
"## Load CRF model",
"_____no_output_____"
]
],
[
[
"crf = malaya.entity.crf()\ncrf.predict(string)",
"_____no_output_____"
]
],
[
[
"## Load Case-Sensitive CRF model",
"_____no_output_____"
]
],
[
[
"crf = malaya.entity.crf(sensitive = True)\ncrf.predict(string)",
"_____no_output_____"
]
],
[
[
"## Print important features from CRF model",
"_____no_output_____"
]
],
[
[
"crf.print_features(10)",
"Top-10 positive:\n14.340635 person word:pengarah\n11.162717 person prev_word:perbendaharaan\n10.906426 location word:dibuat-buat\n10.462828 person word:berkelulusan\n9.680613 organization word:pas\n9.152880 person word:Presidennya\n8.668067 OTHER prev_word:bergabungnya\n8.637761 location word:Iran\n8.336057 person word:dinaungi\n8.233552 person word:Johan\n\nTop-10 negative:\n-5.274524 OTHER prev_word:pelantikan\n-5.344889 OTHER word:pembangkang\n-5.375710 OTHER word:terminal\n-5.699221 person is_numeric\n-5.855398 organization suffix-3:ari\n-6.036876 OTHER word:memintanya\n-6.082631 OTHER word:pengasuhnya\n-6.278501 person next_word-prefix-2:Kp\n-6.818189 OTHER prefix-3:di-\n-7.422581 person suffix-3:ada\n"
]
],
[
[
"## Print important transitions from CRF Model",
"_____no_output_____"
]
],
[
[
"crf.print_transitions(10)",
"Top-10 likely transitions:\nOTHER -> OTHER 4.720173\norganization -> organization 4.512877\nevent -> event 4.286578\nquantity -> quantity 4.244444\nperson -> person 4.099601\nlocation -> location 4.051204\nlaw -> law 3.888215\ntime -> time 2.618322\nOTHER -> location 0.361435\nOTHER -> person 0.255809\n\nTop-10 unlikely transitions:\norganization -> event -4.005846\nquantity -> location -4.030371\nlaw -> organization -4.154642\ntime -> person -4.226871\nquantity -> organization -4.251120\nperson -> law -4.379608\nlaw -> time -4.421451\norganization -> time -4.700082\ntime -> quantity -7.386138\nquantity -> time -7.824427\n"
]
],
[
[
"## Load deep learning models",
"_____no_output_____"
]
],
[
[
"for i in malaya.entity.available_deep_model():\n print('Testing %s model'%(i))\n model = malaya.entity.deep_model(i)\n print(model.predict(string))\n print()",
"Testing concat model\n[('kuala', 'location'), ('lumpur', 'location'), ('sempena', 'OTHER'), ('sambutan', 'event'), ('aidilfitri', 'event'), ('minggu', 'time'), ('depan', 'time'), ('perdana', 'person'), ('menteri', 'person'), ('tun', 'person'), ('dr', 'person'), ('mahathir', 'person'), ('mohamad', 'person'), ('dan', 'OTHER'), ('menteri', 'OTHER'), ('pengangkutan', 'OTHER'), ('anthony', 'person'), ('loke', 'person'), ('siew', 'person'), ('fook', 'person'), ('menitipkan', 'OTHER'), ('pesanan', 'OTHER'), ('khas', 'OTHER'), ('kepada', 'OTHER'), ('orang', 'OTHER'), ('ramai', 'OTHER'), ('yang', 'OTHER'), ('mahu', 'OTHER'), ('pulang', 'OTHER'), ('ke', 'OTHER'), ('kampung', 'location'), ('halaman', 'location'), ('masing-masing', 'OTHER'), ('dalam', 'OTHER'), ('video', 'OTHER'), ('pendek', 'OTHER'), ('terbitan', 'OTHER'), ('jabatan', 'OTHER'), ('keselamatan', 'OTHER'), ('jalan', 'location'), ('raya', 'organization'), ('jkjr', 'event'), ('itu', 'OTHER'), ('dr', 'person'), ('mahathir', 'person'), ('menasihati', 'OTHER'), ('mereka', 'OTHER'), ('supaya', 'OTHER'), ('berhenti', 'OTHER'), ('berehat', 'OTHER'), ('dan', 'OTHER'), ('tidur', 'OTHER'), ('sebentar', 'OTHER'), ('sekiranya', 'OTHER'), ('mengantuk', 'OTHER'), ('ketika', 'OTHER'), ('memandu', 'OTHER')]\n\nTesting bahdanau model\n[('kuala', 'location'), ('lumpur', 'location'), ('sempena', 'OTHER'), ('sambutan', 'event'), ('aidilfitri', 'event'), ('minggu', 'time'), ('depan', 'OTHER'), ('perdana', 'person'), ('menteri', 'person'), ('tun', 'person'), ('dr', 'person'), ('mahathir', 'person'), ('mohamad', 'person'), ('dan', 'OTHER'), ('menteri', 'organization'), ('pengangkutan', 'organization'), ('anthony', 'person'), ('loke', 'person'), ('siew', 'person'), ('fook', 'person'), ('menitipkan', 'OTHER'), ('pesanan', 'OTHER'), ('khas', 'OTHER'), ('kepada', 'OTHER'), ('orang', 'OTHER'), ('ramai', 'OTHER'), ('yang', 'OTHER'), ('mahu', 'OTHER'), ('pulang', 'OTHER'), ('ke', 'OTHER'), ('kampung', 'OTHER'), ('halaman', 'OTHER'), ('masing-masing', 'OTHER'), ('dalam', 'OTHER'), ('video', 'OTHER'), ('pendek', 'OTHER'), ('terbitan', 'OTHER'), ('jabatan', 'OTHER'), ('keselamatan', 'OTHER'), ('jalan', 'organization'), ('raya', 'organization'), ('jkjr', 'OTHER'), ('itu', 'OTHER'), ('dr', 'person'), ('mahathir', 'person'), ('menasihati', 'OTHER'), ('mereka', 'OTHER'), ('supaya', 'OTHER'), ('berhenti', 'OTHER'), ('berehat', 'OTHER'), ('dan', 'OTHER'), ('tidur', 'OTHER'), ('sebentar', 'OTHER'), ('sekiranya', 'OTHER'), ('mengantuk', 'OTHER'), ('ketika', 'OTHER'), ('memandu', 'OTHER')]\n\nTesting luong model\n[('kuala', 'location'), ('lumpur', 'location'), ('sempena', 'OTHER'), ('sambutan', 'event'), ('aidilfitri', 'event'), ('minggu', 'time'), ('depan', 'time'), ('perdana', 'person'), ('menteri', 'person'), ('tun', 'person'), ('dr', 'person'), ('mahathir', 'person'), ('mohamad', 'person'), ('dan', 'OTHER'), ('menteri', 'OTHER'), ('pengangkutan', 'person'), ('anthony', 'person'), ('loke', 'person'), ('siew', 'person'), ('fook', 'person'), ('menitipkan', 'OTHER'), ('pesanan', 'OTHER'), ('khas', 'OTHER'), ('kepada', 'OTHER'), ('orang', 'OTHER'), ('ramai', 'OTHER'), ('yang', 'OTHER'), ('mahu', 'OTHER'), ('pulang', 'OTHER'), ('ke', 'OTHER'), ('kampung', 'OTHER'), ('halaman', 'OTHER'), ('masing-masing', 'OTHER'), ('dalam', 'OTHER'), ('video', 'OTHER'), ('pendek', 'OTHER'), ('terbitan', 'OTHER'), ('jabatan', 'OTHER'), ('keselamatan', 'OTHER'), ('jalan', 'organization'), ('raya', 'organization'), ('jkjr', 'person'), ('itu', 'OTHER'), ('dr', 'person'), ('mahathir', 'person'), 
('menasihati', 'OTHER'), ('mereka', 'OTHER'), ('supaya', 'OTHER'), ('berhenti', 'OTHER'), ('berehat', 'OTHER'), ('dan', 'OTHER'), ('tidur', 'OTHER'), ('sebentar', 'OTHER'), ('sekiranya', 'OTHER'), ('mengantuk', 'OTHER'), ('ketika', 'OTHER'), ('memandu', 'OTHER')]\n\nTesting entity-network model\n[('kuala', 'location'), ('lumpur', 'location'), ('sempena', 'OTHER'), ('sambutan', 'OTHER'), ('aidilfitri', 'event'), ('minggu', 'event'), ('depan', 'OTHER'), ('perdana', 'OTHER'), ('menteri', 'OTHER'), ('tun', 'person'), ('dr', 'person'), ('mahathir', 'person'), ('mohamad', 'person'), ('dan', 'OTHER'), ('menteri', 'OTHER'), ('pengangkutan', 'OTHER'), ('anthony', 'person'), ('loke', 'person'), ('siew', 'person'), ('fook', 'person'), ('menitipkan', 'event'), ('pesanan', 'event'), ('khas', 'event'), ('kepada', 'OTHER'), ('orang', 'person'), ('ramai', 'OTHER'), ('yang', 'OTHER'), ('mahu', 'OTHER'), ('pulang', 'OTHER'), ('ke', 'OTHER'), ('kampung', 'OTHER'), ('halaman', 'OTHER'), ('masing-masing', 'OTHER'), ('dalam', 'OTHER'), ('video', 'OTHER'), ('pendek', 'OTHER'), ('terbitan', 'OTHER'), ('jabatan', 'organization'), ('keselamatan', 'organization'), ('jalan', 'organization'), ('raya', 'organization'), ('jkjr', 'organization'), ('itu', 'OTHER'), ('dr', 'person'), ('mahathir', 'person'), ('menasihati', 'OTHER'), ('mereka', 'OTHER'), ('supaya', 'OTHER'), ('berhenti', 'OTHER'), ('berehat', 'OTHER'), ('dan', 'OTHER'), ('tidur', 'OTHER'), ('sebentar', 'OTHER'), ('sekiranya', 'OTHER'), ('mengantuk', 'OTHER'), ('ketika', 'OTHER'), ('memandu', 'OTHER')]\n\nTesting attention model\n[('kuala', 'location'), ('lumpur', 'location'), ('sempena', 'OTHER'), ('sambutan', 'event'), ('aidilfitri', 'event'), ('minggu', 'event'), ('depan', 'OTHER'), ('perdana', 'person'), ('menteri', 'OTHER'), ('tun', 'person'), ('dr', 'person'), ('mahathir', 'person'), ('mohamad', 'person'), ('dan', 'OTHER'), ('menteri', 'OTHER'), ('pengangkutan', 'organization'), ('anthony', 'person'), ('loke', 'person'), ('siew', 'person'), ('fook', 'person'), ('menitipkan', 'OTHER'), ('pesanan', 'OTHER'), ('khas', 'OTHER'), ('kepada', 'OTHER'), ('orang', 'OTHER'), ('ramai', 'OTHER'), ('yang', 'OTHER'), ('mahu', 'OTHER'), ('pulang', 'OTHER'), ('ke', 'OTHER'), ('kampung', 'location'), ('halaman', 'location'), ('masing-masing', 'OTHER'), ('dalam', 'OTHER'), ('video', 'OTHER'), ('pendek', 'OTHER'), ('terbitan', 'OTHER'), ('jabatan', 'organization'), ('keselamatan', 'OTHER'), ('jalan', 'organization'), ('raya', 'organization'), ('jkjr', 'organization'), ('itu', 'OTHER'), ('dr', 'person'), ('mahathir', 'person'), ('menasihati', 'OTHER'), ('mereka', 'OTHER'), ('supaya', 'OTHER'), ('berhenti', 'OTHER'), ('berehat', 'OTHER'), ('dan', 'OTHER'), ('tidur', 'OTHER'), ('sebentar', 'OTHER'), ('sekiranya', 'OTHER'), ('mengantuk', 'OTHER'), ('ketika', 'OTHER'), ('memandu', 'OTHER')]\n\n"
]
],
[
[
"## Load Case-Sensitive deep learning models",
"_____no_output_____"
]
],
[
[
"for i in malaya.entity.available_deep_model():\n print('Testing %s model'%(i))\n model = malaya.entity.deep_model(i, sensitive = True)\n print(model.predict(string))\n print()",
"Testing concat model\n[('Kuala', 'location'), ('Lumpur', 'location'), ('Sempena', 'OTHER'), ('sambutan', 'time'), ('Aidilfitri', 'time'), ('minggu', 'OTHER'), ('depan', 'OTHER'), ('Perdana', 'person'), ('Menteri', 'person'), ('Tun', 'person'), ('Dr', 'person'), ('Mahathir', 'person'), ('Mohamad', 'person'), ('dan', 'OTHER'), ('Menteri', 'person'), ('Pengangkutan', 'person'), ('Anthony', 'person'), ('Loke', 'person'), ('Siew', 'person'), ('Fook', 'person'), ('menitipkan', 'OTHER'), ('pesanan', 'person'), ('khas', 'OTHER'), ('kepada', 'OTHER'), ('orang', 'person'), ('ramai', 'OTHER'), ('yang', 'OTHER'), ('mahu', 'OTHER'), ('pulang', 'OTHER'), ('ke', 'OTHER'), ('kampung', 'location'), ('halaman', 'OTHER'), ('masing-masing', 'OTHER'), ('Dalam', 'OTHER'), ('video', 'OTHER'), ('pendek', 'OTHER'), ('terbitan', 'OTHER'), ('Jabatan', 'organization'), ('Keselamatan', 'organization'), ('Jalan', 'organization'), ('Raya', 'law'), ('Jkjr', 'time'), ('itu', 'time'), ('Dr', 'person'), ('Mahathir', 'person'), ('menasihati', 'OTHER'), ('mereka', 'OTHER'), ('supaya', 'OTHER'), ('berhenti', 'OTHER'), ('berehat', 'OTHER'), ('dan', 'OTHER'), ('tidur', 'OTHER'), ('sebentar', 'OTHER'), ('sekiranya', 'OTHER'), ('mengantuk', 'OTHER'), ('ketika', 'OTHER'), ('memandu', 'OTHER')]\n\nTesting bahdanau model\n[('Kuala', 'location'), ('Lumpur', 'location'), ('Sempena', 'OTHER'), ('sambutan', 'OTHER'), ('Aidilfitri', 'event'), ('minggu', 'time'), ('depan', 'OTHER'), ('Perdana', 'person'), ('Menteri', 'person'), ('Tun', 'person'), ('Dr', 'person'), ('Mahathir', 'person'), ('Mohamad', 'person'), ('dan', 'OTHER'), ('Menteri', 'person'), ('Pengangkutan', 'person'), ('Anthony', 'person'), ('Loke', 'person'), ('Siew', 'person'), ('Fook', 'person'), ('menitipkan', 'OTHER'), ('pesanan', 'OTHER'), ('khas', 'OTHER'), ('kepada', 'OTHER'), ('orang', 'OTHER'), ('ramai', 'OTHER'), ('yang', 'OTHER'), ('mahu', 'OTHER'), ('pulang', 'OTHER'), ('ke', 'OTHER'), ('kampung', 'person'), ('halaman', 'person'), ('masing-masing', 'OTHER'), ('Dalam', 'OTHER'), ('video', 'OTHER'), ('pendek', 'OTHER'), ('terbitan', 'OTHER'), ('Jabatan', 'organization'), ('Keselamatan', 'organization'), ('Jalan', 'organization'), ('Raya', 'organization'), ('Jkjr', 'organization'), ('itu', 'OTHER'), ('Dr', 'person'), ('Mahathir', 'person'), ('menasihati', 'OTHER'), ('mereka', 'OTHER'), ('supaya', 'OTHER'), ('berhenti', 'OTHER'), ('berehat', 'OTHER'), ('dan', 'OTHER'), ('tidur', 'OTHER'), ('sebentar', 'OTHER'), ('sekiranya', 'OTHER'), ('mengantuk', 'OTHER'), ('ketika', 'OTHER'), ('memandu', 'OTHER')]\n\nTesting luong model\n[('Kuala', 'location'), ('Lumpur', 'location'), ('Sempena', 'OTHER'), ('sambutan', 'OTHER'), ('Aidilfitri', 'event'), ('minggu', 'OTHER'), ('depan', 'OTHER'), ('Perdana', 'person'), ('Menteri', 'person'), ('Tun', 'person'), ('Dr', 'person'), ('Mahathir', 'person'), ('Mohamad', 'person'), ('dan', 'OTHER'), ('Menteri', 'person'), ('Pengangkutan', 'person'), ('Anthony', 'person'), ('Loke', 'person'), ('Siew', 'person'), ('Fook', 'person'), ('menitipkan', 'OTHER'), ('pesanan', 'OTHER'), ('khas', 'OTHER'), ('kepada', 'OTHER'), ('orang', 'OTHER'), ('ramai', 'OTHER'), ('yang', 'OTHER'), ('mahu', 'OTHER'), ('pulang', 'OTHER'), ('ke', 'OTHER'), ('kampung', 'OTHER'), ('halaman', 'OTHER'), ('masing-masing', 'OTHER'), ('Dalam', 'OTHER'), ('video', 'OTHER'), ('pendek', 'OTHER'), ('terbitan', 'OTHER'), ('Jabatan', 'organization'), ('Keselamatan', 'organization'), ('Jalan', 'organization'), ('Raya', 'organization'), ('Jkjr', 'organization'), ('itu', 'OTHER'), 
('Dr', 'person'), ('Mahathir', 'person'), ('menasihati', 'OTHER'), ('mereka', 'OTHER'), ('supaya', 'OTHER'), ('berhenti', 'OTHER'), ('berehat', 'OTHER'), ('dan', 'OTHER'), ('tidur', 'OTHER'), ('sebentar', 'OTHER'), ('sekiranya', 'OTHER'), ('mengantuk', 'OTHER'), ('ketika', 'OTHER'), ('memandu', 'OTHER')]\n\nTesting entity-network model\n[('Kuala', 'location'), ('Lumpur', 'location'), ('Sempena', 'OTHER'), ('sambutan', 'OTHER'), ('Aidilfitri', 'event'), ('minggu', 'OTHER'), ('depan', 'OTHER'), ('Perdana', 'OTHER'), ('Menteri', 'OTHER'), ('Tun', 'person'), ('Dr', 'person'), ('Mahathir', 'person'), ('Mohamad', 'person'), ('dan', 'OTHER'), ('Menteri', 'OTHER'), ('Pengangkutan', 'person'), ('Anthony', 'person'), ('Loke', 'person'), ('Siew', 'person'), ('Fook', 'person'), ('menitipkan', 'person'), ('pesanan', 'OTHER'), ('khas', 'OTHER'), ('kepada', 'OTHER'), ('orang', 'person'), ('ramai', 'person'), ('yang', 'OTHER'), ('mahu', 'OTHER'), ('pulang', 'OTHER'), ('ke', 'OTHER'), ('kampung', 'OTHER'), ('halaman', 'location'), ('masing-masing', 'OTHER'), ('Dalam', 'OTHER'), ('video', 'OTHER'), ('pendek', 'OTHER'), ('terbitan', 'OTHER'), ('Jabatan', 'organization'), ('Keselamatan', 'organization'), ('Jalan', 'organization'), ('Raya', 'organization'), ('Jkjr', 'organization'), ('itu', 'OTHER'), ('Dr', 'person'), ('Mahathir', 'person'), ('menasihati', 'OTHER'), ('mereka', 'OTHER'), ('supaya', 'OTHER'), ('berhenti', 'OTHER'), ('berehat', 'OTHER'), ('dan', 'OTHER'), ('tidur', 'OTHER'), ('sebentar', 'OTHER'), ('sekiranya', 'OTHER'), ('mengantuk', 'OTHER'), ('ketika', 'OTHER'), ('memandu', 'OTHER')]\n\nTesting attention model\n[('Kuala', 'person'), ('Lumpur', 'person'), ('Sempena', 'OTHER'), ('sambutan', 'OTHER'), ('Aidilfitri', 'time'), ('minggu', 'time'), ('depan', 'time'), ('Perdana', 'person'), ('Menteri', 'person'), ('Tun', 'person'), ('Dr', 'person'), ('Mahathir', 'person'), ('Mohamad', 'person'), ('dan', 'OTHER'), ('Menteri', 'OTHER'), ('Pengangkutan', 'OTHER'), ('Anthony', 'person'), ('Loke', 'person'), ('Siew', 'person'), ('Fook', 'person'), ('menitipkan', 'OTHER'), ('pesanan', 'OTHER'), ('khas', 'OTHER'), ('kepada', 'OTHER'), ('orang', 'OTHER'), ('ramai', 'OTHER'), ('yang', 'OTHER'), ('mahu', 'OTHER'), ('pulang', 'OTHER'), ('ke', 'OTHER'), ('kampung', 'OTHER'), ('halaman', 'OTHER'), ('masing-masing', 'OTHER'), ('Dalam', 'OTHER'), ('video', 'OTHER'), ('pendek', 'OTHER'), ('terbitan', 'OTHER'), ('Jabatan', 'event'), ('Keselamatan', 'event'), ('Jalan', 'event'), ('Raya', 'event'), ('Jkjr', 'event'), ('itu', 'OTHER'), ('Dr', 'person'), ('Mahathir', 'person'), ('menasihati', 'OTHER'), ('mereka', 'OTHER'), ('supaya', 'OTHER'), ('berhenti', 'OTHER'), ('berehat', 'OTHER'), ('dan', 'OTHER'), ('tidur', 'OTHER'), ('sebentar', 'OTHER'), ('sekiranya', 'OTHER'), ('mengantuk', 'OTHER'), ('ketika', 'OTHER'), ('memandu', 'OTHER')]\n\n"
]
],
[
[
"## Print important features from deep learning model",
"_____no_output_____"
]
],
[
[
"bahdanau = malaya.entity.deep_model('bahdanau')\nbahdanau.print_features(10)",
"Top-10 positive:\nmade: 4.456522\neffendi: 3.826650\ndipo: 3.723355\ndjamil: 3.653246\nnoorfadila: 3.638877\nahad: 3.611547\nkinabalu: 3.601939\nyorrys: 3.546461\n2008: 3.510597\nustaz: 3.450228\n\nTop-10 negative:\nmemilih: -3.813004\ngentar: -3.738811\nkenalan: -3.586572\nmelanjutkan: -3.510132\nistilah: -3.410603\nseusai: -3.405963\nkepolisian: -3.371908\nperwira: -3.364473\npadi: -3.242083\nperusahaan: -3.196474\n"
]
],
[
[
"## Print important transitions from deep learning model",
"_____no_output_____"
]
],
[
[
"bahdanau.print_transitions(10)",
"Top-10 likely transitions:\nquantity -> quantity: 0.768479\nlaw -> law: 0.748858\nevent -> event: 0.671466\ntime -> time: 0.566861\nquantity -> PAD: 0.515885\norganization -> time: 0.430649\nPAD -> law: 0.396928\ntime -> person: 0.387298\ntime -> organization: 0.380183\nOTHER -> time: 0.346963\n\nTop-10 unlikely transitions:\nperson -> law: -0.959066\nlaw -> person: -0.763240\nevent -> organization: -0.744430\nperson -> event: -0.647477\ntime -> event: -0.640794\nlaw -> OTHER: -0.634643\norganization -> event: -0.629229\norganization -> OTHER: -0.606970\nOTHER -> law: -0.598875\nOTHER -> event: -0.598665\n"
]
],
[
[
"## Visualize output alignment from attention\n\nThis visualization only can call from `bahdanau` or `luong` model.",
"_____no_output_____"
]
],
[
[
"d_object, predicted, state_fw, state_bw = bahdanau.get_alignment(string)",
"_____no_output_____"
],
[
"d_object.to_graphvis()",
"_____no_output_____"
]
],
[
[
"## Voting stack model",
"_____no_output_____"
]
],
[
[
"entity_network = malaya.entity.deep_model('entity-network')\nbahdanau = malaya.entity.deep_model('bahdanau')\nluong = malaya.entity.deep_model('luong')\nmalaya.stack.voting_stack([entity_network, bahdanau, luong], string)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e732c1a27f09d7ec4d3a176d436a3769fbff39e3 | 20,146 | ipynb | Jupyter Notebook | Main/Autoencoder/Simple Autoencoder 1/Simple Autoencoder.ipynb | MalcolmGomes/CPS040-Thesis | 1d7a750169f56923ffbd14d96c7c8e4c5d377bf9 | [
"MIT"
] | null | null | null | Main/Autoencoder/Simple Autoencoder 1/Simple Autoencoder.ipynb | MalcolmGomes/CPS040-Thesis | 1d7a750169f56923ffbd14d96c7c8e4c5d377bf9 | [
"MIT"
] | null | null | null | Main/Autoencoder/Simple Autoencoder 1/Simple Autoencoder.ipynb | MalcolmGomes/CPS040-Thesis | 1d7a750169f56923ffbd14d96c7c8e4c5d377bf9 | [
"MIT"
] | null | null | null | 85.004219 | 1,765 | 0.636901 | [
[
[
"## Import Libraries",
"_____no_output_____"
]
],
[
[
"import os\nimport torch\nimport torchvision\n\nfrom torch import nn\nfrom torch.autograd import Variable\nfrom torch.utils.data import DataLoader\nfrom torchvision import transforms\nfrom torchvision.datasets import MNIST\nfrom torchvision.utils import save_image",
"_____no_output_____"
],
[
"if not os.path.exists('./mlp_img'):\n os.mkdir('./mlp_img')\n\n\ndef to_img(x):\n x = 0.5 * (x + 1)\n x = x.clamp(0, 1)\n x = x.view(x.size(0), 1, 28, 28)\n return x\n\n\nnum_epochs = 100\nbatch_size = 128\nlearning_rate = 1e-3\n\n# img_transform = transforms.Compose([\n# transforms.ToTensor(),\n# transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))\n# ])\n\nimg_transform = transforms.Compose([transforms.ToTensor(),\ntransforms.Normalize((0.5,), (0.5,))\n])\n\ndataset = MNIST('./data', transform=img_transform, download=True)\ndataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)\n\n\nclass autoencoder(nn.Module):\n def __init__(self):\n super(autoencoder, self).__init__()\n self.encoder = nn.Sequential(\n nn.Linear(28 * 28, 128),\n nn.ReLU(True),\n nn.Linear(128, 64),\n nn.ReLU(True), nn.Linear(64, 12), nn.ReLU(True), nn.Linear(12, 3))\n self.decoder = nn.Sequential(\n nn.Linear(3, 12),\n nn.ReLU(True),\n nn.Linear(12, 64),\n nn.ReLU(True),\n nn.Linear(64, 128),\n nn.ReLU(True), nn.Linear(128, 28 * 28), nn.Tanh())\n\n def forward(self, x):\n x = self.encoder(x)\n x = self.decoder(x)\n return x\n\n\n\nmodel = autoencoder().cuda()\ncriterion = nn.MSELoss()\noptimizer = torch.optim.Adam(\n model.parameters(), lr=learning_rate, weight_decay=1e-5)\nprint('Test')\nfor epoch in range(num_epochs):\n for data in dataloader: \n img, = data\n img = img.view(img.size(0), -1)\n img = Variable(img).cuda()\n # ===================forward=====================\n output = model(img)\n loss = criterion(output, img)\n # ===================backward====================\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n # ===================log========================\n print('epoch [{}/{}], loss:{:.4f}'\n .format(epoch + 1, num_epochs, loss.item()))\n if epoch % 10 == 0:\n pic = to_img(output.cpu().data)\n save_image(pic, './mlp_img/image_{}.png'.format(epoch))\n\ntorch.save(model.state_dict(), './sim_autoencoder.pth')",
"Test\n"
],
[
"import torch\nfrom torchvision.transforms import transforms\nfrom PIL import Image\nfrom pathlib import Path\n\nclass autoencoder(nn.Module):\n def __init__(self):\n super(autoencoder, self).__init__()\n self.encoder = nn.Sequential(\n nn.Linear(28 * 28, 128),\n nn.ReLU(True),\n nn.Linear(128, 64),\n nn.ReLU(True), nn.Linear(64, 12), nn.ReLU(True), nn.Linear(12, 3))\n self.decoder = nn.Sequential(\n nn.Linear(3, 12),\n nn.ReLU(True),\n nn.Linear(12, 64),\n nn.ReLU(True),\n nn.Linear(64, 128),\n nn.ReLU(True), nn.Linear(128, 28 * 28), nn.Tanh())\n\n def forward(self, x):\n x = self.encoder(x)\n x = self.decoder(x)\n return x\n\nmodel = autoencoder().cuda()\ncheckpoint = torch.load('sim_autoencoder.pth')\nmodel.load_state_dict(checkpoint)\ntrans = transforms.Compose([\n transforms.RandomHorizontalFlip(),\n transforms.Resize(32),\n transforms.CenterCrop(32),\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5),(0.5, 0.5, 0.5))\n ])\n\nimage = Image.open('panda.jpg')\n\ninput = trans(image)\n\ninput = input.view(1, 3, 32,32)\n\noutput = model(input)\n\nprediction = int(torch.max(output.data, 1)[1].numpy())\nprint(prediction)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e732c6eab2a5de191d2da212ae59559917b7cbd0 | 197,405 | ipynb | Jupyter Notebook | jupyter-notebooks/src/microstructural/Microstructural-Features.ipynb | BlackSwine/compendium | 8ca631d79605c4e34ef2cec1dc5a469ab7639623 | [
"BSD-3-Clause-Clear",
"Unlicense"
] | 1 | 2021-02-25T13:59:32.000Z | 2021-02-25T13:59:32.000Z | jupyter-notebooks/src/microstructural/Microstructural-Features.ipynb | BlackSwine/compendium | 8ca631d79605c4e34ef2cec1dc5a469ab7639623 | [
"BSD-3-Clause-Clear",
"Unlicense"
] | 1 | 2020-03-19T15:07:58.000Z | 2020-03-19T15:07:58.000Z | jupyter-notebooks/src/microstructural/Microstructural-Features.ipynb | BlackSwine/compendium | 8ca631d79605c4e34ef2cec1dc5a469ab7639623 | [
"BSD-3-Clause-Clear",
"Unlicense"
] | 2 | 2021-08-04T11:28:01.000Z | 2021-09-01T14:51:55.000Z | 286.925872 | 42,508 | 0.924095 | [
[
[
"import mlfinlab\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n",
"_____no_output_____"
],
[
"help(mlfinlab)\n# load data\ndata = pd.read_csv('./ES.csv', nrows=100000)\ndata.head()",
"Help on package mlfinlab:\n\nNAME\n mlfinlab - Package based on the text book: Advances in Financial Machine Learning, by Marcos Lopez de Prado\n\nPACKAGE CONTENTS\n data_structures (package)\n features (package)\n filters (package)\n labeling (package)\n multi_product (package)\n sample_weights (package)\n sampling (package)\n tests (package)\n util (package)\n\nSUBMODULES\n fracdiff\n microstructural\n\nFILE\n /Users/maksimivanov/research/Chapter19/mlfinlab/__init__.py\n\n\n"
]
],
[
[
"# Microstructure features",
"_____no_output_____"
],
[
"Market microstructure features aim to tease out useful information from the trading behavior of market participants on exchanges. These features have become more popular with the increased amount and granularity of data provided by exchanges. As a result, multiple models of liquidity, uncertainty, and price impact have emerged from this data.",
"_____no_output_____"
],
[
"## First generation: price sequences",
"_____no_output_____"
],
[
"### The Tick Rule",
"_____no_output_____"
]
],
[
[
"# help(mlfinlab.features.microstructural)\nfrom mlfinlab.features.microstructural import tick_rule\naggressor = tick_rule(data['Price'])\n",
"_____no_output_____"
]
],
[
[
"### The Roll Model",
"_____no_output_____"
]
],
[
[
"from mlfinlab.features.microstructural import roll_model\nspread, noise = roll_model(data['Price'])\nspread, noise\n",
"_____no_output_____"
]
],
[
[
"### High-Low Volatility Estimator",
"_____no_output_____"
]
],
[
[
"# first create some bars \nfrom mlfinlab.data_structures import get_dollar_bars\nfrom mlfinlab.features.microstructural import high_low_estimator\n\ndate_time = data['Date and Time'] \nnew_data = pd.concat([date_time, data['Price'], data['Volume']], axis=1)\nnew_data.columns = ['date', 'price', 'volume']\nprint(new_data.head(20))\nprint('\\n')\nprint('Rows:', new_data.shape[0])\nnew_data.to_csv('./maks_tick_data.csv', index=False)\n\nth = 10000\nbars = get_dollar_bars('./maks_tick_data.csv', threshold=th, batch_size=1000000, verbose=True)\nprint('Bars:', bars.shape[0])\n\nvol = high_low_estimator(bars.high, bars.low, window=50)\nplt.figure(figsize=(10, 5))\nvol.plot()\nplt.figure(figsize=(10, 5))\nbars.close.plot()",
" date price volume\n0 2017/01/02 17:00:00.077 2240.75 1360\n1 2017/01/02 17:00:00.140 2241.00 1\n2 2017/01/02 17:00:00.140 2241.00 5\n3 2017/01/02 17:00:00.140 2241.00 1\n4 2017/01/02 17:00:00.140 2240.75 15\n5 2017/01/02 17:00:00.140 2240.75 2\n6 2017/01/02 17:00:00.140 2240.75 1\n7 2017/01/02 17:00:00.140 2240.75 3\n8 2017/01/02 17:00:00.140 2240.75 1\n9 2017/01/02 17:00:00.207 2241.00 8\n10 2017/01/02 17:00:00.207 2240.75 1\n11 2017/01/02 17:00:00.207 2241.00 30\n12 2017/01/02 17:00:00.207 2241.00 3\n13 2017/01/02 17:00:00.260 2241.25 19\n14 2017/01/02 17:00:00.260 2241.25 1\n15 2017/01/02 17:00:00.288 2241.00 1\n16 2017/01/02 17:00:00.288 2241.00 2\n17 2017/01/02 17:00:00.288 2241.25 1\n18 2017/01/02 17:00:00.288 2241.00 1\n19 2017/01/02 17:00:00.288 2241.00 2\n\n\nRows: 100000\nReading data in batches:\nBatch number: 0\nReturning bars \n\nBars: 48818\n"
]
],
[
[
"### Corwin-Shultz Algorithm",
"_____no_output_____"
]
],
[
[
"from mlfinlab.features.microstructural import corwin_shultz_spread, becker_parkinson_volatility\nspread, start_ind = corwin_shultz_spread(bars.high, bars.low, 100)\nvol = becker_parkinson_volatility(bars.high, bars.low, 100)\n\nplt.figure(figsize=(10, 5))\nspread.plot()\nplt.figure(figsize=(10, 5))\nvol.plot()",
"_____no_output_____"
]
],
[
[
"## Second generation: strategic trade models",
"_____no_output_____"
],
[
"### Kyle's Lambda",
"_____no_output_____"
]
],
[
[
"from mlfinlab.data_structures import BarFeature\nfrom mlfinlab.features.microstructural import kyles_lambda, dollar_volume\n\nkyles_lambda_feature = BarFeature(name='kyles_lambda', function= lambda df: kyles_lambda(df['price'], df['volume']))\nbars = get_dollar_bars('./maks_tick_data.csv', threshold=70000000, batch_size=1000000,\n additional_features=[kyles_lambda_feature])\n\nplt.figure(figsize=(10, 5))\nbars['kyles_lambda'].hist()",
"Reading data in batches:\nBatch number: 0\nReturning bars \n\n"
]
],
[
[
"### Amihud's Lambda",
"_____no_output_____"
]
],
[
[
"from mlfinlab.features.microstructural import dollar_volume, amihuds_lambda\ndollar_volume_feature = BarFeature(name='dollar_volume', function= lambda df: dollar_volume(df['price'], df['volume']))\nbars = get_dollar_bars('./maks_tick_data.csv', threshold=70000000, batch_size=1000000,\n additional_features=[dollar_volume_feature])\nlambda_ = amihuds_lambda(bars['close'], bars['dollar_volume'])\nlambda_",
"Reading data in batches:\nBatch number: 0\nReturning bars \n\n"
]
],
[
[
"### Hasbrouck's Lambda",
"_____no_output_____"
]
],
[
[
"from mlfinlab.features.microstructural import dollar_volume, hasbroucks_lambda, hasbroucks_flow\n\ndef get_hasbroucks_flow(df):\n tick_signs = tick_rule(df['price'])\n return hasbroucks_flow(df['price'], df['volume'], tick_signs)\n\nhasbroucks_flow_feature = BarFeature(name='hasbroucks_flow', function=get_hasbroucks_flow)\nbars = get_dollar_bars('./maks_tick_data.csv', threshold=70000000, batch_size=1000000,\n additional_features=[hasbroucks_flow_feature])\n\nlambda_ = hasbroucks_lambda(bars['close'], bars['hasbroucks_flow'])\nlambda_",
"Reading data in batches:\nBatch number: 0\nReturning bars \n\n"
]
],
[
[
"## Third generation: sequential trade models",
"_____no_output_____"
],
[
"### Volume-Synchronized Probability of Informed Trading",
"_____no_output_____"
]
],
[
[
"from mlfinlab.features.microstructural import vpin\nfrom mlfinlab.data_structures import get_volume_bars\n\ndef buy_volume(df):\n tick_signs = tick_rule(df['price'])\n return (df['volume'] * (tick_signs > 0)).sum()\n\ndef sell_volume(df):\n tick_signs = tick_rule(df['price'])\n return (df['volume'] * (tick_signs < 0)).sum()\n \nbuy_volume_feature = BarFeature(name='buy_volume', function=buy_volume)\nsell_volume_feature = BarFeature(name='sell_volume', function=sell_volume)\n\nbars = get_volume_bars('./maks_tick_data.csv', additional_features=[buy_volume_feature, sell_volume_feature])\nvolume = 28224\nvpin_series = vpin(bars['buy_volume'], bars['sell_volume'], volume, 5)\nplt.figure(figsize=(10, 5))\nvpin_series.plot()\nplt.figure(figsize=(10, 5))\nbars['close'].plot()",
"Reading data in batches:\nBatch number: 0\nReturning bars \n\n"
]
],
[
[
"## Additional Features",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e732cf72b725d2cf0f6f26fc0d430ff7aff52256 | 1,609 | ipynb | Jupyter Notebook | array_strings/ipynb/max_dictionary.ipynb | PRkudupu/Algo-python | a0b9c3e19e4ece48f5dc47e34860510565ab2f38 | [
"MIT"
] | 1 | 2019-05-04T00:43:52.000Z | 2019-05-04T00:43:52.000Z | array_strings/ipynb/max_dictionary.ipynb | PRkudupu/Algo-python | a0b9c3e19e4ece48f5dc47e34860510565ab2f38 | [
"MIT"
] | null | null | null | array_strings/ipynb/max_dictionary.ipynb | PRkudupu/Algo-python | a0b9c3e19e4ece48f5dc47e34860510565ab2f38 | [
"MIT"
] | null | null | null | 18.709302 | 106 | 0.469236 | [
[
[
"Given an dictionary find the max value in a dictionary. Return the key with the max value<br><br>\n <b>dic ={'a':1,'b':3,'c':2} \n \nop=b",
"_____no_output_____"
]
],
[
[
"def max_dic_key(dic):\n return max(dic,key=dic.get)\ndic ={'a':1,'b':3,'c':2} \nprint(max_dic(dic))",
"b\n"
],
[
"def max_dic_value(dic):\n max_key=max(dic,key=dic.get)\n for k,y in dic.items():\n if k==max_key:\n return y\ndic ={'a':1,'b':3,'c':2} \nprint(max_dic_value(dic))",
"3\n"
]
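,
[
"# Editor's sketch: the key and its value can also be obtained in one pass with dict.items().\ndic = {'a':1,'b':3,'c':2}\nbest_key, best_value = max(dic.items(), key=lambda kv: kv[1])\nprint(best_key, best_value)",
"_____no_output_____"
]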
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
e732d02e2604162a8734c586c824dc575a278f6c | 8,645 | ipynb | Jupyter Notebook | INTEGRATOR/w_integration_methodology.ipynb | john-science/ADVECTOR | 5c5ca7595c2c051f1a088b1f0e694936c3da3610 | [
"MIT"
] | 7 | 2021-09-07T02:32:00.000Z | 2022-01-15T11:35:02.000Z | INTEGRATOR/w_integration_methodology.ipynb | TheOceanCleanupAlgorithms/ADVECT | e27ce15da6a2fcbccbe363f8c2415b0122696d1f | [
"MIT"
] | 1 | 2021-12-20T21:11:33.000Z | 2021-12-20T21:11:33.000Z | INTEGRATOR/w_integration_methodology.ipynb | john-science/ADVECTOR | 5c5ca7595c2c051f1a088b1f0e694936c3da3610 | [
"MIT"
] | 1 | 2021-12-12T15:13:52.000Z | 2021-12-12T15:13:52.000Z | 32.378277 | 393 | 0.596877 | [
[
[
"# Vertical Velocity Generation from Zonal/Meridional Current",
"_____no_output_____"
],
[
"## Summary",
"_____no_output_____"
],
[
"We wish to use the so-called \"Adjoint Method\" described in Luettich 2002 to calculate the vertical velocity $w(z)$ for a column of water. To do so involves three steps:\n\n1. Calculate mass flux into each grid cell due to horizontal currents.\n2. Imposing a boundary condition of $w(h) = 0$, where $h$ is the depth of the seafloor, integrate the mass flux up the water column to arrive at mass flux through the top of each grid cell, which is converted into velocity. This velocity result is the \"traditional\" velocity (as per Luettich 2002), $w_{trad}(z)$.\n3. Apply a correction to $w_{trad}(z)$ to satisfy the second boundary condition $w(z=0) = 0$.",
"_____no_output_____"
],
[
"### Step 1: Calculate Mass Flux into Grid Cells",
"_____no_output_____"
],
[
"Consider a gaussian grid cell as a closed volume (see figure). By continuity, the total mass flow through the six surfaces of this volume must equal zero.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"By using $\\dot{m}$ to represent mass flux, with subscripts $n$ (north), $s$ (south), $e$ (east), $w$ (west), $t$ (top) and $b$ (bottom), this continuity can be expressed as follows:",
"_____no_output_____"
],
[
"$\\dot{m}_w + \\dot{m}_s + \\dot{m}_b = \\dot{m}_e + \\dot{m}_n + \\dot{m}_t$",
"_____no_output_____"
],
[
"Or colloqually, \"mass in = mass out\". Rearranging, we have",
"_____no_output_____"
],
[
"$ \\dot{m}_t - \\dot{m}_b = \\dot{m}_{horizontal} = (\\dot{m}_w - \\dot{m}_e) + (\\dot{m}_s - \\dot{m}_n)$ (1)",
"_____no_output_____"
],
[
"Mass flux is defined as $\\dot{m} = \\rho v A$, where $\\rho$ is the density of the fluid passing through a surface, $v$ is the velocity of the fluid perpendicular to the surface, and $A$ is the the area of the surface. Thus we can rewrite (1) as",
"_____no_output_____"
],
[
"$ \\dot{m}_{horizontal} = (\\rho_w u_w A_w - \\rho_e u_e A_e) + (\\rho_s v_s A_s - \\rho_n v_n A_n)$",
"_____no_output_____"
],
[
"Recognizing $A_w = A_e$, assuming density only changes with z, and approximating the average density along dz as the density at the cell center, $\\rho_{center} = \\rho_w = \\rho_e = \\rho_s = \\rho_n$, we have",
"_____no_output_____"
],
[
"$ \\dot{m}_{horizontal} = \\rho_{center}[A_w (u_w - u_e) + (v_s A_s - v_n A_n)]$ (2)",
"_____no_output_____"
],
[
"We can define $A_w = dy \\cdot dz$, $A_s = dx_s \\cdot dz$, $A_n = dx_n \\cdot dz$, and $\\rho_{center}$ is found emperically from a whole-ocean average vertical density profile.",
"_____no_output_____"
],
[
"Velocities are trickier. We know velocity at the center of each grid cell. But the velocities in equation (2) are defined at the grid cell boundary surfaces. We can estimate these through a linear interpolation, taking the velocity at a surface to be the velocity at the center of the surface. Consider the following 3x3 grid, centered on the grid cell we're concerned with (cell 0):",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"Every point along the boundary between cells 0 and 1 (for example) is equidistant from the centers of cells 0 and 1. Thus, it is sensible to define the velocity everywhere along this (western) boundary as the mean of the velocity of the two cells, $\\mathbf{V_w} = (\\mathbf{V}_0 + \\mathbf{V}_1)/2$; this is a linear interpolation. In this way, we define $u_w, u_e, v_s,$ and $v_n$.",
"_____no_output_____"
],
[
"Since we have defined all the quantities in (2), we can thus use (2) to calculate the horizontal mass flux into every grid cell of the column.",
"_____no_output_____"
],
[
"### Step 2: integrate mass flux upwards, convert to $w$",
"_____no_output_____"
],
[
"We can rearrange eq. (1) to find\n$\\dot{m}_t = \\dot{m}_b + \\dot{m}_{horizontal}$ (3)",
"_____no_output_____"
],
[
"From step 1, we know $\\dot{m}_{horizontal}$ for every grid cell. For the very bottom cell, we know $\\dot{m}_b = 0$ due to our boundary condition $w(h) = 0$. Thusly we can calculate $\\dot{m}_t$ for the first grid cell according to equation 3, thus defining $\\dot{m}_t$ for the cell above; proceeding this way upwards we can calculate $\\dot{m_t}$ for the whole column.",
"_____no_output_____"
],
[
"Finally we can use our definition of mass flux to find $w_t = \\frac{\\dot{m}_t}{\\rho_t A_t}$, and thus have arrived at $w_{trad}(z)$.",
"_____no_output_____"
],
[
"### Step 3: Apply correction to satisfy surface boundary condition",
"_____no_output_____"
],
[
"The nature of daisy-chaining the calculation of each $w_t$ on the previous one means that any systematic errors will grow as we proceed up the water column. Fortunately, we know that at the surface, $w_t = 0$. Thus we can use the \"adjoint method\" from Luettich 2002 to modify the profile such that the boundary conditions are met. The \"adjoint method\" allows you to trade-off ",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e732d11f04c293541356c7c257472855bb41d862 | 10,334 | ipynb | Jupyter Notebook | .ipynb_checkpoints/approach for taillight detection v2 updated-checkpoint.ipynb | Autonomous-Vehicles-Master/elnagdy95 | 944c507c9a61ff2dd4e5ecaeae085eb186202d48 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/approach for taillight detection v2 updated-checkpoint.ipynb | Autonomous-Vehicles-Master/elnagdy95 | 944c507c9a61ff2dd4e5ecaeae085eb186202d48 | [
"MIT"
] | 4 | 2020-09-26T01:20:55.000Z | 2022-02-10T02:09:38.000Z | .ipynb_checkpoints/approach for taillight detection v2 updated-checkpoint.ipynb | Autonomous-Vehicles-Master/elnagdy95 | 944c507c9a61ff2dd4e5ecaeae085eb186202d48 | [
"MIT"
] | null | null | null | 35.030508 | 226 | 0.51229 | [
[
[
"#import argparse\nimport numpy as np\nimport cv2\n#from scipy.misc import imresize\n#from moviepy.editor import VideoFileClip\n#from IPython.display import HTML\nfrom keras.models import load_model\n#from PIL import Image\n\n\n#argparser = argparse.ArgumentParser(\n #description='test FCN8 network for taillights detection')\n\n\n#argparser.add_argument(\n #'-i',\n #'--image',\n #help='path to image file')\n\ndef auto_canny(image, sigma=0.33):\n \n\n # compute the median of the single channel pixel intensities\n v = np.median(image)\n # apply automatic Canny edge detection using the computed median\n lower = int(max(0, (1.0 - sigma) * v))\n upper = int(min(255, (1.0 + sigma) * v))\n edged = cv2.Canny(image, lower, upper)\n # return the edged image\n return edged\n\n\n# Load Keras model\n#model = load_model('full_CNN_model.h5')\n\n# Class to average lanes with\n#class Lanes():\n #def __init__(self):\n #self.recent_fit = []\n #self.avg_fit = []\n\ndef taillight_detect(image):\n \"\"\" Takes in a road image, re-sizes for the model,\n predicts the lane to be drawn from the model in G color,\n recreates an RGB image of a lane and merges with the\n original road image.\n \"\"\"\n model = load_model('full_CNN_model.h5')\n #image1=image\n #image1=np.array(image1)\n #objects=np.squeeze(image,2)\n #rows,cols=objects.shape\n \n rows, cols,_ = image.shape\n \n #cols, rows = image.size\n #cols=160\n #rows=80\n # Get image ready for feeding into model\n \n small_img = cv2.resize(image, (160, 80))\n \n\n #img_y_cr_cb = cv2.cvtColor(small_img, cv2.COLOR_BGR2YCrCb)\n #y, cr, cb = cv2.split(img_y_cr_cb)\n\n # Applying equalize Hist operation on Y channel.\n #y_eq = cv2.equalizeHist(y)\n\n #clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))\n #y_eq = clahe.apply(y)\n\n #img_y_cr_cb_eq = cv2.merge((y_eq, cr, cb))\n #small_img = cv2.cvtColor(img_y_cr_cb_eq, cv2.COLOR_YCR_CB2BGR)\n\n \n #small_img = imresize(image, (80, 160, 3))\n small_img = np.array(small_img)\n small_img = small_img[None,:,:,:]\n\n # Make prediction with neural network (un-normalize value by multiplying by 255)\n prediction = model.predict(small_img)[0] * 255\n\n #new_image = imresize(prediction, (rows, cols, 3))\n\n mask = cv2.resize(prediction, (cols, rows))\n \n img_y_cr_cb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)\n y, cr, cb = cv2.split(img_y_cr_cb)\n\n # Applying equalize Hist operation on Y channel.\n #y_eq = cv2.equalizeHist(y)\n\n clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))\n y_eq = clahe.apply(y)\n\n img_y_cr_cb_eq = cv2.merge((y_eq, cr, cb))\n image_he = cv2.cvtColor(img_y_cr_cb_eq, cv2.COLOR_YCR_CB2BGR)\n \n gray = cv2.cvtColor(image_he, cv2.COLOR_BGR2GRAY)\n blurred = cv2.GaussianBlur(gray, (3, 3), 0)\n #auto = auto_canny(blurred)\n auto = cv2.Canny(blurred, 10, 200)\n\n #for i in range(rows):\n #x = []\n #for j in range(cols):\n #k = gray[i,j]\n #print(k)\n #x.append(gray[i,j])\n #print(x)\n \n for i in range(rows):\n for j in range(cols):\n if auto[i,j] >0 and mask [i,j]>10:\n auto[i,j]=255\n else:\n auto[i,j]=0\n \n #cv2.imshow('histogram equalisation', auto)\n #cv2.waitKey(0)\n \n #h, w = edges.shape[:2]\n filled_from_bottom = np.zeros((rows, cols))\n for col in range(cols):\n for row in reversed(range(rows)):\n if auto[row][col] < 255: filled_from_bottom[row][col] = 255\n else: break\n \n filled_from_top = np.zeros((rows, cols))\n for col in range(cols):\n for row in range(rows):\n if auto[row][col] < 255: filled_from_top[row][col] = 255\n else: break\n \n filled_from_left = np.zeros((rows, cols))\n 
for row in range(rows):\n for col in range(cols):\n if auto[row][col] < 255: filled_from_left[row][col] = 255\n else: break\n \n filled_from_right = np.zeros((rows, cols))\n for row in range(rows):\n for col in reversed(range(cols)):\n if auto[row][col] < 255: filled_from_right[row][col] = 255\n else: break\n \n for i in range(rows):\n for j in range(cols):\n if filled_from_bottom[i,j] ==0 and filled_from_top[i,j]==0 and filled_from_right[i,j] ==0 and filled_from_left[i,j]==0:\n auto[i,j]=255\n else:\n auto[i,j]=0\n \n kernel = np.ones((5,5),np.uint8)\n opening = cv2.morphologyEx(auto, cv2.MORPH_OPEN, kernel)\n closing = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel)\n\n cv2.imshow('histogram equalisation', closing)\n cv2.waitKey(0)\n \n blanks = np.zeros_like(closing).astype(np.uint8)\n lane_drawn = np.dstack((closing, blanks, blanks))\n image = cv2.addWeighted(image, 1, lane_drawn, 1, 0)\n \n cv2.imshow('histogram equalisation', image)\n cv2.waitKey(0)\n \n #closing = np.expand_dims(closing, 2) \n #closing = np.repeat(closing, 3, axis=2) # give the mask the same shape as your image\n #colors = {\"red\": [0.0,1.0,1.0], \"blue\": [0.,0.,0.1]} # a dictionary for your colors, experiment with the values\n #colored_mask = np.multiply(closing, colors[\"red\"]) # broadcast multiplication (thanks to the multiplication by 0, you'll end up with values different from 0 only on the relevant channels and the right regions)\n #image = image+colored_mask # element-wise sum (sinc img and mask have the same shape)\n #cv2.imshow('histogram equalisation', image)\n #cv2.waitKey(0)\n \n\n #return image.astype(float) / 255\n\n #return new_image\n #return auto\n return image\n \n#lanes = Lanes()\n\n# Where to save the output video\n#vid_output = 'proj_reg_vid.mp4'\n\n# Location of the input video\n#clip1 = VideoFileClip(\"project_video.mp4\")\n\n#vid_clip = clip1.fl_image(road_lines)\n#vid_clip.write_videofile(vid_output, audio=False)\n\n#def _main_(args):\n #image_path = args.image\n\n\n#im = cv2.imread(\"ft.png\")\n#detected=taillight_detect(im)\n\n\n#cv2.imwrite('detected.jpg',detected)\n\n\n\nimage = cv2.imread(\"ft5.png\")\n\nx=taillight_detect(image)\n\n#cv2.imshow('histogram equalisation', x)\n#cv2.waitKey(0)\n\n#img_y_cr_cb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)\n#y, cr, cb = cv2.split(img_y_cr_cb)\n\n # Applying equalize Hist operation on Y channel.\n #y_eq = cv2.equalizeHist(y)\n\n#clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))\n#y_eq = clahe.apply(y)\n\n#img_y_cr_cb_eq = cv2.merge((y_eq, cr, cb))\n#image = cv2.cvtColor(img_y_cr_cb_eq, cv2.COLOR_YCR_CB2BGR)\n\n#gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n#blurred = cv2.GaussianBlur(gray, (3, 3), 0)\n\n# apply Canny edge detection using a wide threshold, tight\n# threshold, and automatically determined threshold\n\n#wide = cv2.Canny(blurred, 10, 200)\n#tight = cv2.Canny(blurred, 225, 250)\n#auto = auto_canny(blurred)\n\n# show the images\n#cv2.imshow(\"Original\", image)\n#cv2.imshow(\"Edges\", np.hstack([wide, tight, auto]))\n#cv2.waitKey(0)\n#rows,cols = auto.shape\n#for i in range(rows):\n #x = []\n #for j in range(cols):\n #k = gray[i,j]\n #print(k)\n #x.append(auto[i,j])\n #print(x)\n\n#cv2.imshow('histogram equalisation', detected)\n#cv2.waitKey(0)\n\n#if __name__ == '__main__':\n #args = argparser.parse_args()\n #_main_(args)\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
e732f0403cb27d6c19cc2baf7c2a7fd1f9fd42dc | 316,224 | ipynb | Jupyter Notebook | presentations/2015-09-17(Sum of Two Sines and Moving Average).ipynb | h-mayorquin/time_series_basic | 654fb67ef6258b3f200c15a2b8068ab9300401d7 | [
"BSD-3-Clause"
] | null | null | null | presentations/2015-09-17(Sum of Two Sines and Moving Average).ipynb | h-mayorquin/time_series_basic | 654fb67ef6258b3f200c15a2b8068ab9300401d7 | [
"BSD-3-Clause"
] | null | null | null | presentations/2015-09-17(Sum of Two Sines and Moving Average).ipynb | h-mayorquin/time_series_basic | 654fb67ef6258b3f200c15a2b8068ab9300401d7 | [
"BSD-3-Clause"
] | null | null | null | 660.175365 | 73,598 | 0.935656 | [
[
[
"# Sum of two sines and moving average\nHere I will study how the statistics and the signal between a sine and its moving average.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport statsmodels.api as sm\n%matplotlib inline",
"_____no_output_____"
],
[
"w1 = 0.3\nw2 = 1.0\ndt = 0.1\nT = 200\nNt = int(T / dt)\nTperiod1 = 20.0\nw1 = (2 * np.pi) / Tperiod1\nTperiod2 = 2.0\nA2 = 10.0\nw2 = (2 * np.pi) / Tperiod2\nA = 5.0\n\n\nt = np.arange(start=0, stop=T, step=dt)\noriginal_signal = np.sin(w1 * t) + A2 * np.sin(w2 * t) \noriginal_signal = A * np.sin(w1 * t) * np.sin(w2 * t)\n\n\n# Get some noise\nnoise = False\nif noise:\n std = 1.0\n noise = np.random.normal(loc=0, scale=std, size=t.size)\n original_signal += original_signal\n\n# Now let's calculate a moving average\nwindow_size = 5.0\nNwindow_size = int(window_size / dt)\na = np.ones(Nwindow_size)\n",
"_____no_output_____"
]
],
[
[
"#### Create the moving average",
"_____no_output_____"
]
],
[
[
"y1 = np.zeros(Nt)\ny2 = np.zeros(Nt)\naux = np.convolve(original_signal, a / Nwindow_size, mode='valid')\ny2[Nwindow_size:] = aux[:-1]\n\nfor index in range(Nwindow_size, Nt):\n x_windowed = original_signal[index - Nwindow_size:index]\n product = np.dot(x_windowed, a) / Nwindow_size\n y1[index] = product\n ",
"_____no_output_____"
],
[
"plt.plot(t, original_signal, label='Original')\nplt.plot(t, y1, label='Own Method')\nplt.plot(t, y2, label='Convolution')\n# plt.ylim([-3, 3])\nplt.legend()",
"_____no_output_____"
]
],
[
[
"#### Now print the correlations",
"_____no_output_____"
]
],
[
[
"print(np.corrcoef(original_signal, y1))\nprint(np.corrcoef(original_signal, y2))",
"[[ 1. -0.04154273]\n [-0.04154273 1. ]]\n[[ 1. -0.04154273]\n [-0.04154273 1. ]]\n"
]
],
[
[
"#### Autocorrelations of the Signal",
"_____no_output_____"
]
],
[
[
"nlags = 200\nt = np.arange(0, int((nlags) * dt) + dt, dt)\n# t = np.linspace(0, int(nlags * dt), num=nlags)\nacf_original = sm.tsa.stattools.acf(original_signal, nlags=nlags)\nacf_y1 = sm.tsa.stattools.acf(y1, nlags=nlags)\nacf_y2 = sm.tsa.stattools.acf(y2, nlags=nlags)",
"_____no_output_____"
],
[
"plt.plot(t, acf_original)",
"_____no_output_____"
],
[
"plt.plot(t, acf_y1)",
"_____no_output_____"
],
[
"plt.plot(t, acf_y2)",
"_____no_output_____"
]
],
[
[
"## Now we Process our data with Nexa",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path.append(\"../\")\n\nfrom inputs.sensors import Sensor, PerceptualSpace\nfrom inputs.lag_structure import LagStructure\n\n# Visualization libraries\nfrom visualization.sensor_clustering import visualize_cluster_matrix\nfrom visualization.sensors import visualize_SLM\nfrom visualization.sensors import visualize_STDM_seaborn\nfrom visualization.time_cluster import visualize_time_cluster_matrix\nfrom visualization.code_vectors import visualize_code_vectors\n\nfrom nexa.nexa import Nexa",
"_____no_output_____"
],
[
"Tperiod = Tperiod1\nlag_times = np.arange(0, 2 * Tperiod) # Go two times the period\ntau = 2 * Tperiod\nwindow_size = 1 * Tperiod\nNwindowsize = int(window_size / dt)\n# weights = np.exp( -np.arange(Nwindowsize) / tau) \nweights = None\nlag_structure = LagStructure(lag_times=lag_times, weights=weights, window_size=window_size)\nsensor1 = Sensor(original_signal, dt, lag_structure)\nsensor2 = Sensor(y1, dt, lag_structure)\nsensors = [sensor1, sensor2]\nperceptual_space = PerceptualSpace(sensors, lag_first=True)\n\nNspatial_clusters = 4 # Number of spatial clusters\nNtime_clusters = 2 # Number of time clusters\nNembedding = 3 # Dimension of the embedding space\n\n# Now the Nexa object\nnexa_object = Nexa(perceptual_space, Nspatial_clusters,\n Ntime_clusters, Nembedding)\n\n# Make all the calculations\nnexa_object.calculate_all()",
"_____no_output_____"
]
],
[
[
"### Nexa Visualizations",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfig = visualize_SLM(nexa_object)\nplt.show(fig)",
"_____no_output_____"
],
[
"# %matplotlib qt\n# fig = visualize_STDM(nexa_object)\nfig = visualize_STDM_seaborn(nexa_object)\nplt.show(fig)",
"_____no_output_____"
],
[
"%matplotlib inline\nfig = visualize_cluster_matrix(nexa_object)",
"_____no_output_____"
],
[
"%matplotlib inline\ncluster = 0\ntime_center = 0\nfig = visualize_time_cluster_matrix(nexa_object, cluster, time_center,\n cmap='coolwarm', inter='none',\n origin='upper', fontsize=16)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e732f804676ff18f1486d0f99c600fc0a35f89e8 | 14,360 | ipynb | Jupyter Notebook | stellar/stellar_ucl.ipynb | xujiahuayz/stellar_workshop | b1725f8bf51edea06e3f5edf79df9d470cec7c4f | [
"MIT"
] | null | null | null | stellar/stellar_ucl.ipynb | xujiahuayz/stellar_workshop | b1725f8bf51edea06e3f5edf79df9d470cec7c4f | [
"MIT"
] | null | null | null | stellar/stellar_ucl.ipynb | xujiahuayz/stellar_workshop | b1725f8bf51edea06e3f5edf79df9d470cec7c4f | [
"MIT"
] | null | null | null | 31.629956 | 183 | 0.590877 | [
[
[
"# Stellar UCL Workshop\n26 November, 2021\n\nGoal: Issuing an asset on Stellar (called \"XU\") to tokenize your professor's office hours.\n\nSection 1: Configure the SDK\n\nSection 2: Assets & Payments\n- Set up a wallet, and receive funds from the faucet.\n- Issuing an asset on Stellar.\n- Receiving and paying XU asset with a memo, to book a time-slot.\n \nSection 3: The DEX\n- Exchanging and converting assets.\n\nYou'll need:\n\n- Python3 SDK with dependencies installed via:\n ```\n pip install requests\n pip install stellar_sdk\n ```\n- IDE (Visual Studio Code, or other)\n- Code we’ll be going through: [https://bit.ly/stellar-ucl](https://bit.ly/stellar-ucl)\n- Tools\n - [https://stellar.expert](https://stellar.expert)\n - [https://laboratory.stellar.org](https://laboratory.stellar.org)\n - [https://horizon-testnet.stellar.org](https://horizon-testnet.stellar.org)\n\n\n",
"_____no_output_____"
],
[
"# 1. Configure the SDK\n\nConfigure stellar_sdk to talk to the horizon instance hosted by Stellar.org.\nFor production applications, you should run your own instance of Horizon, but for testing & development using the SDF url is fine.\n\nStellar has two official networks, the live public network, and the testnet for testing & development. For this demo we'll be using the testnet.",
"_____no_output_____"
]
],
[
[
"import requests\nimport stellar_sdk\n\n# Configure StellarSdk to talk to the horizon instance hosted by Stellar.org\n# To use the live network, set the hostname to 'horizon.stellar.org'\nhorizon_url = \"https://horizon-testnet.stellar.org\"\nhorizon = stellar_sdk.Server(horizon_url=horizon_url)",
"_____no_output_____"
]
],
[
[
"The Stellar network's native asset is the \"lumen\", or \"XLM\". It is used to pay network fees. When it is used, it is destroyed.",
"_____no_output_____"
]
],
[
[
"xlm = stellar_sdk.Asset.native()",
"_____no_output_____"
]
],
[
[
"Base fee of network operations (in stroops).\n\n`100 stroops = 0.0000100 XLM = ~USD$0.0000035`\n\nNote: Higher fees might be required when the network is under heavy usage.",
"_____no_output_____"
]
],
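[
[
"To make the unit conversion concrete, here is a quick sketch in plain Python (no Stellar-specific API involved) converting the base fee from stroops to XLM:\n\n```python\nSTROOPS_PER_XLM = 10_000_000  # 1 XLM = 10^7 stroops\n\nbase_fee_stroops = 100\nbase_fee_xlm = base_fee_stroops / STROOPS_PER_XLM\nprint(f'{base_fee_stroops} stroops = {base_fee_xlm:.7f} XLM')  # 0.0000100 XLM\n```",
"_____no_output_____"
]
],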
[
[
"base_fee = 100",
"_____no_output_____"
]
],
[
[
"# 2. Assets & Payments",
"_____no_output_____"
],
[
"## 2.1. Create the Accounts\n\nFor this demo we'll need two accounts. A \"professor\", and a \"student\".\n\nThe professor will:\n- Issues the Xu asset to the student.\n- Provides liquidity to buy/sell Xu.\n\nThe student will:\n- Receive/Buy Xu.\n- Uses it to pay for a timeslot.\n\nStellar is account-based, not UTXO-based. There is an on-chain representation of each account and it's balances.\n\nAccounts must maintain a minimum reserve balance of lumens (XLM).\n\nAccounts are created/funded on-chain by existing accounts.\n\nOn testnet we use a faucet called “friendbot”, to create the accounts. Friendbot will give each account 10,000 XLM.\n\n(This `create_account` function is taken from the python SDK docs)\n",
"_____no_output_____"
]
],
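[
[
"To get a feel for the minimum reserve mentioned above, here is a rough sketch (the base reserve is a network parameter and can change; 0.5 XLM is assumed here). The minimum balance is `(2 + number of subentries) * base reserve`, where subentries include trustlines, offers, signers and data entries:\n\n```python\nbase_reserve_xlm = 0.5  # assumption: current network base reserve\n\ndef minimum_balance(num_subentries):\n    # 2 base reserves for the account itself, plus one per subentry\n    return (2 + num_subentries) * base_reserve_xlm\n\nprint(minimum_balance(0))  # fresh account: 1.0 XLM\nprint(minimum_balance(1))  # account holding one trustline (e.g. XU): 1.5 XLM\n```",
"_____no_output_____"
]
],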
[
[
"def create_account(name):\n \"\"\"Create an account on the testnet.\"\"\"\n key_pair = stellar_sdk.Keypair.random()\n url = \"https://friendbot.stellar.org\"\n _response = requests.get(url, params={\"addr\": key_pair.public_key})\n # Check _response.json() in case something goes wrong\n print(f\"{name} Public Key: {key_pair.public_key}\")\n print(f\"{name} Secret Seed: {key_pair.secret}\")\n print(f\"{name} URL: {horizon_url}/accounts/{key_pair.public_key}\")\n return key_pair\n\nprofessor_keys = create_account(\"Professor\")\nstudent_keys = create_account(\"Student\")",
"Professor Public Key: GDNQVUUGK2MZBZ7SKFPYWC2WSRALGT2A2ND6DQUHH66WS6LDCUR2AH5I\nProfessor Secret Seed: SCFJGTAIID65P32QKEKZGXSSAI6ZXM4ROAWIJZK33VLGA4IUACZ6JEYY\nProfessor URL: https://horizon-testnet.stellar.org/accounts/GDNQVUUGK2MZBZ7SKFPYWC2WSRALGT2A2ND6DQUHH66WS6LDCUR2AH5I\nStudent Public Key: GB3O4WKQTMRUF7KTNXEUSBLF2GID3QVI4SHPN5NLWGP5PH7LUVZQWON6\nStudent Secret Seed: SAZGG26Y7I4UA3WVPML2VG7LLIUZWMHT22MN3DB5CBWRPQSF6T7U4WWC\nStudent URL: https://horizon-testnet.stellar.org/accounts/GB3O4WKQTMRUF7KTNXEUSBLF2GID3QVI4SHPN5NLWGP5PH7LUVZQWON6\n"
]
],
[
[
"Transactions require a valid sequence number that is specific to the sender's account.\nWe can fetch the current sequence number for the source account from Horizon.",
"_____no_output_____"
]
],
[
[
"professor_account = horizon.load_account(professor_keys.public_key)\nstudent_account = horizon.load_account(student_keys.public_key)",
"_____no_output_____"
]
],
[
[
"\n## 2.2. Defining Our Asset\n\nAssets are identified by: `Code:Issuer`.",
"_____no_output_____"
]
],
[
[
"# Define our asset identifier\nxu = stellar_sdk.Asset(\"XU\", professor_keys.public_key)\nprint(f\"XU Asset: {xu.code}:{xu.issuer}\")\n",
"_____no_output_____"
]
],
[
[
"## 2.3. Student Establishes a Trustline for the Asset\n\nAnyone can issue an asset on stellar.\n\nYou want to make sure you’re using the “right” one.\n\nA trustline is an explicit opt-in to hold a particular token, so it specifies both asset code and issuer.\n\nLimits your account to the subset of all assets that you trust.\n\n\n\nStudent will establish a trustline to the XU asset issued by the issuer.",
"_____no_output_____"
]
],
[
[
"transaction = (\n stellar_sdk.TransactionBuilder(\n source_account=student_account,\n network_passphrase=stellar_sdk.Network.TESTNET_NETWORK_PASSPHRASE,\n base_fee=base_fee,\n )\n # we need a trust line for the xu asset\n .append_change_trust_op(asset=xu)\n .set_timeout(30) # Make this transaction valid for the next 30 seconds only\n .build()\n)\n\n# sign & submit the transaction\ntransaction.sign(student_keys)\nresponse = horizon.submit_transaction(transaction)\nprint(f\"{horizon_url}/transactions/{response['id']}\")",
"_____no_output_____"
]
],
[
[
"## 2.4. Professor Issues Some XU to the Student\n\nAssets are created when the issuer makes a payment.\n\nThe professor pays the student 30 XU, creating the asset.\n",
"_____no_output_____"
]
],
[
[
"transaction = (\n stellar_sdk.TransactionBuilder(\n source_account=professor_account,\n network_passphrase=stellar_sdk.Network.TESTNET_NETWORK_PASSPHRASE,\n base_fee=base_fee,\n )\n # issue 30 xu to the student\n .append_payment_op(\n destination=student_keys.public_key,\n asset=xu,\n amount=\"30.0000000\",\n )\n .set_timeout(30) # Make this transaction valid for the next 30 seconds only\n .build()\n)\n\n# sign & submit the transaction\ntransaction.sign(professor_keys)\nresponse = horizon.submit_transaction(transaction)\nprint(f\"{horizon_url}/transactions/{response['id']}\")",
"_____no_output_____"
]
],
[
[
"## 2.5. Student Spends XU to Book a Timeslot\n\nTo book a timeslot, the student will pay some XU to the professor.\n\nEach transaction can have a memo attached, to help applications differentiate, and transfer extra data.",
"_____no_output_____"
]
],
[
[
"transaction = (\n stellar_sdk.TransactionBuilder(\n source_account=student_account,\n network_passphrase=stellar_sdk.Network.TESTNET_NETWORK_PASSPHRASE,\n base_fee=base_fee,\n )\n # spend 30 xu to book a 30-minute slot\n .append_payment_op(\n destination=professor_keys.public_key,\n asset=xu,\n amount=\"30.0000000\",\n )\n .add_text_memo(\"2021-11-26T12:00Z\")\n .set_timeout(30) # Make this transaction valid for the next 30 seconds only\n .build()\n)\n\n# sign & submit the transaction\ntransaction.sign(student_keys)\nresponse = horizon.submit_transaction(transaction)\nprint(f\"{horizon_url}/transactions/{response['id']}\")",
"_____no_output_____"
]
],
[
[
"# 3. The DEX\n\nStellar has a DEX (decentralised exchange) built into the protocol, for doing currency conversion and exchange.\n\nProfessor Xu is business-savvy, and decides that if students want more office hours, they should pay for them.\n\nThe professor decides to sell their office hours on the DEX.",
"_____no_output_____"
],
[
"## 3.1. Professor Adds Liquidity to the DEX\n\nMarket Makers provide liquidity in the DEX (via Orderbook & Liquidity Pools). For this workshop, the professor will be a market-maker, placing sell offers into the orderbook.\n\nTraders use the liquidity in the DEX to atomically convert assets via “Path Payments”.\n\n",
"_____no_output_____"
]
],
[
[
"# build the transaction\ntransaction = (\n stellar_sdk.TransactionBuilder(\n source_account=professor_account,\n network_passphrase=stellar_sdk.Network.TESTNET_NETWORK_PASSPHRASE,\n base_fee=base_fee,\n )\n # Add a \"manage sell offer\" operation to the transaction\n .append_manage_sell_offer_op(\n selling=xu,\n buying=xlm,\n amount=\"1000.0000000\",\n price=stellar_sdk.Price(1, 1),\n )\n .set_timeout(30) # Make this transaction valid for the next 30 seconds only\n .build()\n)\n\n# sign & submit the transaction\ntransaction.sign(professor_keys)\nresponse = horizon.submit_transaction(transaction)\nprint(f\"{horizon_url}/transactions/{response['id']}\")",
"_____no_output_____"
]
],
[
[
"## 3.2. Student Buys XU from the DEX\n\nPath payments are the interface to the DEX, and how assets are converted.\n\nWhen converting `X -> Y`, you can either specify the amount of `X` you are sending, or the amount of `Y` you'd like the destination to receive.\n\nNote: The destination can be your own (or any other) account!",
"_____no_output_____"
]
],
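[
[
"The cell below uses the strict-receive variant (it fixes the amount of XU the destination receives). For completeness, a sketch of the strict-send direction — fixing the amount of XLM sent and accepting whatever XU that buys, down to a minimum — would look roughly like this (the amounts are illustrative, and `append_path_payment_strict_send_op` is assumed to be available as in recent `stellar_sdk` releases):\n\n```python\ntransaction = (\n    stellar_sdk.TransactionBuilder(\n        source_account=student_account,\n        network_passphrase=stellar_sdk.Network.TESTNET_NETWORK_PASSPHRASE,\n        base_fee=base_fee,\n    )\n    # Spend exactly 30 XLM, require at least 29 XU in return\n    .append_path_payment_strict_send_op(\n        destination=student_keys.public_key,\n        send_asset=xlm, send_amount=\"30.0000000\",\n        dest_asset=xu, dest_min=\"29.0000000\",\n        path=[xlm, xu]\n    )\n    .set_timeout(30)\n    .build()\n)\ntransaction.sign(student_keys)\nresponse = horizon.submit_transaction(transaction)\n```",
"_____no_output_____"
]
],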
[
[
"transaction = (\n stellar_sdk.TransactionBuilder(\n source_account=student_account,\n network_passphrase=stellar_sdk.Network.TESTNET_NETWORK_PASSPHRASE,\n base_fee=base_fee,\n )\n # Buy 30 xu token\n .append_path_payment_strict_receive_op(\n destination=student_keys.public_key,\n send_asset=xlm, send_max=\"30.0000000\",\n dest_asset=xu, dest_amount=\"30.0000000\",\n path=[xlm, xu]\n )\n .set_timeout(30) # Make this transaction valid for the next 30 seconds only\n .build()\n)\n\n# sign & submit the transaction\ntransaction.sign(student_keys)\nresponse = horizon.submit_transaction(transaction)\nprint(f\"{horizon_url}/transactions/{response['id']}\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7330024aa5d74e86fb2c4e18f1d43991c716884 | 234,253 | ipynb | Jupyter Notebook | p3.ipynb | propellingbits/CarND-P3-Behavioral-Cloning | be35d0ece3db41eb24afaa7eaf3db15faeafffb0 | [
"MIT"
] | null | null | null | p3.ipynb | propellingbits/CarND-P3-Behavioral-Cloning | be35d0ece3db41eb24afaa7eaf3db15faeafffb0 | [
"MIT"
] | null | null | null | p3.ipynb | propellingbits/CarND-P3-Behavioral-Cloning | be35d0ece3db41eb24afaa7eaf3db15faeafffb0 | [
"MIT"
] | null | null | null | 251.614393 | 193,290 | 0.893854 | [
[
[
"import matplotlib.pyplot as plt\nimport random \nimport csv\nimport numpy as np\nimport cv2\nimport sklearn\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.utils import shuffle\nimport pandas as pd\nimport math\n\n# Visualizations will be shown in the notebook.\n%matplotlib inline\n\nlines = []\ncenter_angle = 0\ncenter_angle_o = 0\ncenterAngleImgCount = 0\nangleCol = np.zeros((10, 1000, 1))\nrowCount = 0\nscopeIndex = 0 #index for that section\n\nwith open('./data/driving_log.csv') as csvfile:\n reader = csv.reader(csvfile)\n \n for line in reader:\n if line[3] == 'steering':\n continue\n center_angle = round(float(line[3]), 2)\n center_angle_o = line[3]\n if centerAngleImgCount > 1500 and center_angle == 0:\n continue\n elif center_angle == 0:\n centerAngleImgCount += 1\n \n lines.append(center_angle)\n \n if (center_angle >= .50 and center_angle <= .60):\n #count = 7\n scopeIndex = 0\n lastItemIndex = int(angleCol[scopeIndex])\n angleCol[scopeIndex] += 1 #total item count at this index\n angleCol[scopeIndex][lastItemIndex] = center_angle_o #random steering angle value\n angleCol[scopeIndex] += 1\n angleCol[scopeIndex][0][0] = rowCount #keeping reference to lines array index for this item\n elif (center_angle >= .30 and center_angle <= .40):\n #count = 5\n scopeIndex = 1\n \n lastItemIndex = int(angleCol[scopeIndex])\n angleCol[scopeIndex] += 1 #total item count at this index\n angleCol[scopeIndex][lastItemIndex] = center_angle_o #random steering angle value\n angleCol[scopeIndex] += 1\n angleCol[scopeIndex][0][0] = rowCount\n \n elif (center_angle >= .20 and center_angle <= .30):\n #count = 4 \n scopeIndex = 2 \n \n lastItemIndex = int(angleCol[scopeIndex])\n angleCol[scopeIndex] += 1 #total item count at this index\n angleCol[scopeIndex][lastItemIndex] = center_angle_o #random steering angle value\n angleCol[scopeIndex] += 1\n angleCol[scopeIndex][0][0] = rowCount\n elif (center_angle >= .10 and center_angle <= .20):\n #count = 3\n scopeIndex = 3\n \n lastItemIndex = int(angleCol[scopeIndex])\n angleCol[scopeIndex] += 1 #total item count at this index\n angleCol[0][lastItemIndex] = center_angle_o #random steering angle value\n angleCol[scopeIndex] += 1\n angleCol[scopeIndex][0][0] = rowCount\n elif (center_angle <= -.01 and center_angle >= -.10):\n #count = 3\n scopeIndex = 4\n \n lastItemIndex = int(angleCol[scopeIndex])\n angleCol[scopeIndex] += 1 #total item count at this index\n angleCol[0][lastItemIndex] = center_angle_o #random steering angle value\n angleCol[scopeIndex] += 1\n angleCol[scopeIndex][0][0] = rowCount\n elif (center_angle <= -.10 and center_angle >= -.25):\n #count = 6\n scopeIndex = 5\n \n lastItemIndex = int(angleCol[scopeIndex])\n angleCol[scopeIndex] += 1 #total item count at this index\n angleCol[scopeIndex][lastItemIndex] = center_angle_o #random steering angle value\n angleCol[scopeIndex] += 1\n angleCol[scopeIndex][0][0] = rowCount\n elif (center_angle <= -.25 and center_angle >= -.35):\n #count = 8\n scopeIndex = 6\n \n lastItemIndex = int(angleCol[scopeIndex])\n angleCol[scopeIndex] += 1 #total item count at this index\n angleCol[scopeIndex][lastItemIndex] = center_angle_o #random steering angle value\n angleCol[scopeIndex] += 1\n angleCol[scopeIndex][0][0] = rowCount\n elif (center_angle <= -.35 and center_angle >= -.45):\n #count = 10\n scopeIndex = 7\n \n lastItemIndex = int(angleCol[scopeIndex])\n angleCol[scopeIndex] += 1 #total item count at this index\n angleCol[scopeIndex][lastItemIndex] = center_angle_o #random steering angle 
value\n angleCol[scopeIndex] += 1\n angleCol[scopeIndex][0][0] = rowCount\n \n rowCount += 1\n #for x in range (0, count):\n # lines.append(center_angle)\n\n#skipping the headers\n#lines = lines[1:] \nn_angles = len(lines)\n#print (lines)\n#plt.hist(lines, n_angles)\n#plt.show()\nprint ()\nstart = 0\nmin =0\nmax = 0\nstop = np.amax(angleCol[0::])\n#print(angleCol[0::])\nprint (stop)\nfor i in range(len(angleCol)):\n start = angleCol[i]\n min = np.amin(angleCol[:1:])\n max = np.amax(angleCol[:1:])\n for j in (start, stop):\n angleCol[i][j] = random.uniform(min, max)\n\ndef print_train_labels():\n print()\n print(\"Samples distribution:\")\n print(\"%-50s%-32s\" % (\"Label\", \"|Count\"))\n histogram = np.histogram(lines, bins=np.arange(9000))\n for i in range(len(histogram[0])):\n print(\"%-50s|%-32d\" % (lines[histogram[1][i]], histogram[0][i]))\n\n\n#print_train_labels()\n#lines = lines.reshape((lines.shape[0],)) \n\n \n#ax = pd.DataFrame({'X':lines, 'Y':lines}).plot()\n\nplt.hist(lines, bins= 50, color= 'red')\nplt.xlabel('steering value')\nplt.ylabel('counts')",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport random \nimport csv\nimport numpy as np\nimport cv2\nimport sklearn\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.utils import shuffle\nimport pandas as pd\nimport math\n\n# Visualizations will be shown in the notebook.\n%matplotlib inline\n\n\nfileName1 = \"./data/IMG/center_2016_12_01_13_32_45_275.jpg\"\nfileName2 = \"./data/IMG/left_2016_12_01_13_32_45_275.jpg\"\nfileName3 = \"./data/IMG/right_2016_12_01_13_32_45_275.jpg\"\n\n\ndef normalize(image):\n #return (image/255 - 0.5)\n return image / 127.5 - 1\n\ndef blur(img, kernel_size):\n return cv2.blur(img, (kernel_size, kernel_size))\n\ndef cropImage(img):\n return img[60:140, :, :] #height, width, color channels\n\ndef resizeImage(img):\n #cv2.resize(img, (cols (width), rows (height)))\n img = cv2.resize(img, (66, 200))\n return img\n\ndef rgb2yuv(image):\n image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)\n return image\n\ndef flipVertical(image):\n image = cv2.flip(image, 1)\n return image\n\ndef hsv(image):\n \n image = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)\n \n #print (image.shape)\n return image\n\n\ndef preprocessImage(img):\n #cv2.resize(img, (cols (width), rows (height)))\n ##img = cv2.resize(img, (200, 60))\n # img = cv2.resize(img, (80,40))\n # img = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)\n # img = img[15:40, 0:80]\n # #print (image.shape)\n # return img[:,:,1]\n # B,G,R channels of image index. We are locating grey channel \n # #http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.html - image array 3 channel index \n #img[:, :, 2] = img[:, :, 2] * brightness\n #img = normalize(blur(hsv(img)[:,:,1], kernel_size=5))\n #img = blur(hsv(img)[:,:,1], kernel_size=5)\n ##img = blur(img, kernel_size=5)\n croppedImage = cropImage(img)\n resizedImage = resizeImage(croppedImage)\n #rgb2yuved = rgb2yuv(resizedImage)\n hsved = hsv(resizedImage)[:,:,1]\n blurred = blur(hsved, 5)\n processedImg = blurred\n return processedImg\n\ncenter_image = cv2.imread(fileName1)\nleft_image = cv2.imread(fileName2)\nright_image = cv2.imread(fileName3)\n\nflipped_image = np.fliplr(center_image)\n\ncroppedImage = cropImage(right_image)\n\n\ncenter_image = resizeImage(croppedImage)\n#resizedImage = resizeImage(croppedImage)\n#rgb2yuved = rgb2yuv(resizedImage)\n#hsved = hsv(resizedImage)\n#right_image =hsved\nright_image = flipVertical(right_image)\n\n#right_image = cv2.resize(right_image, (80,40))\n#right_image = right_image[15:40, 0:80]\n#right_image = right_image[10:20,:,:]\n\n\n\nfig = plt.figure(figsize=(10,5))\n\naxis = fig.add_subplot(2,2,1)\naxis.set_xlabel('center image')\nplt.xticks(np.array([]))\nplt.yticks(np.array([]))\naxis.imshow(center_image)\n\n\naxis = fig.add_subplot(2,2,2)\naxis.set_xlabel('left image')\nplt.xticks(np.array([]))\nplt.yticks(np.array([]))\naxis.imshow(cv2.flip(center_image,1))\n\naxis = fig.add_subplot(2,2,3)\naxis.set_xlabel('right image')\nplt.xticks(np.array([]))\nplt.yticks(np.array([]))\naxis.imshow(right_image)\n\naxis = fig.add_subplot(2,2,4)\naxis.set_xlabel('flipped image')\nplt.xticks(np.array([]))\nplt.yticks(np.array([]))\naxis.imshow(flipped_image)\n\n",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport random \nimport csv\nimport numpy as np\nimport cv2\nimport sklearn\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.utils import shuffle\nimport pandas as pd\nimport math\nfrom sklearn.utils import shuffle\n\n#right approach - current\n\n# Visualizations will be shown in the notebook.\n%matplotlib inline\n\nlines = []\nlines1 = []\nlinesfull = []\ncenter_angle = 0\ncenter_angle_o = 0\ncenterAngleImgCount = 0\nangleCol = np.zeros((10, 1000, 1))\nrowCount = 0\nscopeIndex = 0\nsteering = []\nsteering_o = []\nunique_steering = []\nsteering_dict = {}\nangleZeroImages = []\nangleNonZeroImages = []\n\nwith open('./data/driving_log.csv') as csvfile:\n reader = csv.reader(csvfile)\n \n for line in reader:\n if line[3] == 'steering':\n continue\n if float(line[3]) < .01:\n continue\n center_angle = float(line[3])\n #center_angle = float(line[3])\n if center_angle >= -0.1 and center_angle <= 0.1:\n angleZeroImages.append(center_angle)\n else:\n angleNonZeroImages.append(center_angle)\n \n #steering_o.append(line[3])\n #if centerAngleImgCount > 500 and center_angle >= -0.02 and center_angle <= 0.01:\n # continue\n #elif center_angle >= -0.02 and center_angle <= 0.01:\n # centerAngleImgCount += 1\n \n #lines.append(center_angle)\n #linesFull.append(center_angle)\n #steering.append(center_angle)\n \n\n\nshuffle(angleZeroImages)\nprint (len(angleZeroImages))\nprint (len(angleNonZeroImages))\nshuffle(angleNonZeroImages)\nlines = angleZeroImages[0:500]\nlines.extend(angleNonZeroImages[0:2200])\nlines2 =[]\n #uniq_steering = []\n #unique_steering = np.unique(steering)\n #unique_steering = np.unique(steering_o)\n #for i in range(len(unique_steering)):\n #steering_dict[str(unique_steering[i])] += 1\n #print(len(steering[unique_steering[i]]))\nprint(len(lines))\n#plt.hist(lines, bins= 50, color= 'red')\n#plt.xlabel('steering value')\n#plt.ylabel('counts')\nprint('here')\n#print(len(lines)) current\n\nfor i in range(len(lines)):\n \n center_angle = float(lines[i]) \n\n #images.append(center_image)\n #angles.append(center_angle)\n #lines2.append(center_angle)\n #lines2.append(center_angle)\n #if(center_angle > .15 or center_angle < -.15):\n #images.append(flipVertical(center_image))\n #angles.append(-center_angle)\n lines2.append(-center_angle)\n #center_angle >= 0.01 or center_angle <= -.01\n correction = 0.25 # this is a parameter to tune\n\n #left\n\n #left_angle = float(center_angle) + correction\n\n #if (center_angle >= 0.01 or center_angle <= -.01):\n #images.append(left_image)\n #angles.append(left_angle)\n #lines2.append(left_angle)\n #left flipped \n #images.append(np.fliplr(left_image))\n #angles.append(-left_angle)\n\n #images.append(np.fliplr(left_image))\n #angles.append(-left_angle)\n #print ('ca-')\n #print (center_angle)\n #print ('--')\n #print (0.01*random.uniform(2, 3.5))\n #if(center_angle > .10 or center_angle < -.10):\n lines2.append(center_angle + correction)\n\n # right images, stripping starting space\n #imagesPaths.append(imagesPathsAll[i][2].replace(\" \", \"\"))\n # randomly adding angles for right images\n lines2.append(center_angle - correction)\n\n \n #right_angle = float(center_angle) - correction\n #images.append(right_image)\n #angles.append(right_angle)\n #ines2.append(right_angle)\n \n\n#lines2.extend(lines)\n#lines2.extend(lines2)\nprint(len(lines2))\nplt.hist(lines2, bins= 50, color= 'red')\nplt.xlabel('steering value')\nplt.ylabel('counts')",
"712\n1176\n1676\nhere\n5028\n"
],
[
"import matplotlib.pyplot as plt\nimport random \nimport csv\nimport numpy as np\nimport cv2\nimport sklearn\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.utils import shuffle\nimport pandas as pd\nimport math\nfrom sklearn.utils import shuffle\n\n#right approach\n\n# Visualizations will be shown in the notebook.\n%matplotlib inline\n\nlines = []\nlines1 = []\nlinesfull = []\ncenter_angle = 0\ncenter_angle_o = 0\ncenterAngleImgCount = 0\nangleCol = np.zeros((10, 1000, 1))\nrowCount = 0\nscopeIndex = 0\nsteering = []\nsteering_o = []\nunique_steering = []\nsteering_dict = {}\nangleZeroImages = []\nangleNonZeroImages = []\n\nwith open('./data/driving_log.csv') as csvfile:\n reader = csv.reader(csvfile)\n \n for line in reader:\n if line[3] == 'steering':\n continue\n center_angle = round(float(line[3]), 2)\n #center_angle = float(line[3])\n if center_angle >= -0.1 and center_angle <= 0.1:\n angleZeroImages.append(center_angle)\n else:\n angleNonZeroImages.append(center_angle)\n \n #steering_o.append(line[3])\n #if centerAngleImgCount > 500 and center_angle >= -0.02 and center_angle <= 0.01:\n # continue\n #elif center_angle >= -0.02 and center_angle <= 0.01:\n # centerAngleImgCount += 1\n \n #lines.append(center_angle)\n #linesFull.append(center_angle)\n #steering.append(center_angle)\n \n\n\nshuffle(angleZeroImages)\nprint (len(angleZeroImages))\nprint (len(angleNonZeroImages))\nshuffle(angleNonZeroImages)\nlines = angleZeroImages[0:500]\nlines.extend(angleNonZeroImages[0:2000])\nlines2 = []\n #uniq_steering = []\n #unique_steering = np.unique(steering)\n #unique_steering = np.unique(steering_o)\n #for i in range(len(unique_steering)):\n #steering_dict[str(unique_steering[i])] += 1\n #print(len(steering[unique_steering[i]]))\nprint(len(lines))\n#plt.hist(lines, bins= 50, color= 'red')\n#plt.xlabel('steering value')\n#plt.ylabel('counts')\nprint('here')\nprint(len(lines))\n\nfor i in range(len(lines)):\n \n center_angle = float(lines[i]) \n\n #images.append(center_image)\n #angles.append(center_angle)\n #lines2.append(center_angle)\n lines2.append(center_angle)\n if(center_angle > .20 or center_angle < -.20):\n #images.append(flipVertical(center_image))\n #angles.append(-center_angle)\n lines2.append(-center_angle)\n #center_angle >= 0.01 or center_angle <= -.01\n else:\n #correction = 0.25 # this is a parameter to tune\n\n #left\n\n #left_angle = float(center_angle) + correction\n\n #if (center_angle >= 0.01 or center_angle <= -.01):\n #images.append(left_image)\n #angles.append(left_angle)\n #lines2.append(left_angle)\n #left flipped \n #images.append(np.fliplr(left_image))\n #angles.append(-left_angle)\n\n #images.append(np.fliplr(left_image))\n #angles.append(-left_angle)\n #print ('ca-')\n #print (center_angle)\n #print ('--')\n #print (0.01*random.uniform(2, 3.5))\n\n lines2.append(center_angle + random.uniform(.01, .07))\n\n # right images, stripping starting space\n #imagesPaths.append(imagesPathsAll[i][2].replace(\" \", \"\"))\n # randomly adding angles for right images\n lines2.append(center_angle - random.uniform(.01, .07))\n\n \n #right_angle = float(center_angle) - correction\n #images.append(right_image)\n #angles.append(right_angle)\n #ines2.append(right_angle)\n \n\n#lines2.extend(lines)\nprint(len(lines2))\nplt.hist(lines2, bins= 50, color= 'red')\nplt.xlabel('steering value')\nplt.ylabel('counts')import matplotlib.pyplot as plt\nimport random \nimport csv\nimport numpy as np\nimport cv2\nimport sklearn\nfrom sklearn.model_selection import 
train_test_split\nfrom sklearn.utils import shuffle\nimport pandas as pd\nimport math\nfrom sklearn.utils import shuffle\n\n#right approach\n\n# Visualizations will be shown in the notebook.\n%matplotlib inline\n\nlines = []\nlines1 = []\nlinesfull = []\ncenter_angle = 0\ncenter_angle_o = 0\ncenterAngleImgCount = 0\nangleCol = np.zeros((10, 1000, 1))\nrowCount = 0\nscopeIndex = 0\nsteering = []\nsteering_o = []\nunique_steering = []\nsteering_dict = {}\nangleZeroImages = []\nangleNonZeroImages = []\n\nwith open('./data/driving_log.csv') as csvfile:\n reader = csv.reader(csvfile)\n \n for line in reader:\n if line[3] == 'steering':\n continue\n center_angle = round(float(line[3]), 2)\n #center_angle = float(line[3])\n if center_angle >= -0.1 and center_angle <= 0.1:\n angleZeroImages.append(center_angle)\n else:\n angleNonZeroImages.append(center_angle)\n \n #steering_o.append(line[3])\n #if centerAngleImgCount > 500 and center_angle >= -0.02 and center_angle <= 0.01:\n # continue\n #elif center_angle >= -0.02 and center_angle <= 0.01:\n # centerAngleImgCount += 1\n \n #lines.append(center_angle)\n #linesFull.append(center_angle)\n #steering.append(center_angle)\n \n\n\nshuffle(angleZeroImages)\nprint (len(angleZeroImages))\nprint (len(angleNonZeroImages))\nshuffle(angleNonZeroImages)\nlines = angleZeroImages[0:500]\nlines.extend(angleNonZeroImages[0:2000])\nlines2 = []\n #uniq_steering = []\n #unique_steering = np.unique(steering)\n #unique_steering = np.unique(steering_o)\n #for i in range(len(unique_steering)):\n #steering_dict[str(unique_steering[i])] += 1\n #print(len(steering[unique_steering[i]]))\nprint(len(lines))\n#plt.hist(lines, bins= 50, color= 'red')\n#plt.xlabel('steering value')\n#plt.ylabel('counts')\nprint('here')\nprint(len(lines))\n\nfor i in range(len(lines)):\n \n center_angle = float(lines[i]) \n\n #images.append(center_image)\n #angles.append(center_angle)\n #lines2.append(center_angle)\n #lines2.append(center_angle)\n if(center_angle > .20 or center_angle < -.20):\n #images.append(flipVertical(center_image))\n #angles.append(-center_angle)\n lines2.append(-center_angle)\n #center_angle >= 0.01 or center_angle <= -.01\n else:\n #correction = 0.25 # this is a parameter to tune\n\n #left\n\n #left_angle = float(center_angle) + correction\n\n #if (center_angle >= 0.01 or center_angle <= -.01):\n #images.append(left_image)\n #angles.append(left_angle)\n #lines2.append(left_angle)\n #left flipped \n #images.append(np.fliplr(left_image))\n #angles.append(-left_angle)\n\n #images.append(np.fliplr(left_image))\n #angles.append(-left_angle)\n #print ('ca-')\n #print (center_angle)\n #print ('--')\n #print (0.01*random.uniform(2, 3.5))\n\n lines2.append(center_angle + random.uniform(.01, .07))\n\n # right images, stripping starting space\n #imagesPaths.append(imagesPathsAll[i][2].replace(\" \", \"\"))\n # randomly adding angles for right images\n lines2.append(center_angle - random.uniform(.01, .07))\n\n \n #right_angle = float(center_angle) - correction\n #images.append(right_image)\n #angles.append(right_angle)\n #ines2.append(right_angle)\n \n\n#lines2.extend(lines)\nprint(len(lines2))\nplt.hist(lines2, bins= 50, color= 'red')\nplt.xlabel('steering value')\nplt.ylabel('counts')",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport random \nimport csv\nimport numpy as np\nimport cv2\nimport sklearn\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.utils import shuffle\nimport pandas as pd\nimport math\nfrom sklearn.utils import shuffle\n\n#right approach\n\n# Visualizations will be shown in the notebook.\n%matplotlib inline\n\nlines = []\nlines1 = []\nlinesfull = []\ncenter_angle = 0\ncenter_angle_o = 0\ncenterAngleImgCount = 0\nangleCol = np.zeros((10, 1000, 1))\nrowCount = 0\nscopeIndex = 0\nsteering = []\nsteering_o = []\nunique_steering = []\nsteering_dict = {}\nangleZeroImages = []\nangleNonZeroImages = []\n\nwith open('./data/driving_log.csv') as csvfile:\n reader = csv.reader(csvfile)\n \n for line in reader:\n if line[3] == 'steering':\n continue\n center_angle = round(float(line[3]), 2)\n #center_angle = float(line[3])\n if center_angle >= -0.1 and center_angle <= 0.1:\n angleZeroImages.append(center_angle)\n else:\n angleNonZeroImages.append(center_angle)\n \nshuffle(angleZeroImages)\nprint (len(angleZeroImages))\nprint (len(angleNonZeroImages))\nshuffle(angleNonZeroImages)\nlines = angleZeroImages[0:500]\nlines.extend(angleNonZeroImages[0:2000])\nlines2 = []\n #uniq_steering = []\n #unique_steering = np.unique(steering)\n #unique_steering = np.unique(steering_o)\n #for i in range(len(unique_steering)):\n #steering_dict[str(unique_steering[i])] += 1\n #print(len(steering[unique_steering[i]]))\nprint(len(lines))\n#plt.hist(lines, bins= 50, color= 'red')\n#plt.xlabel('steering value')\n#plt.ylabel('counts')\nprint('here')\nprint(len(lines))\n\nfor i in range(len(lines)):\n \n center_angle = float(lines[i]) \n\n #images.append(center_image)\n #angles.append(center_angle)\n #lines2.append(center_angle)\n #lines2.append(center_angle)\n if(center_angle > .20 or center_angle < -.20):\n #images.append(flipVertical(center_image))\n #angles.append(-center_angle)\n lines2.append(-center_angle)\n #center_angle >= 0.01 or center_angle <= -.01\n \n correction = 0.25 # this is a parameter to tune\n\n #left\n\n left_angle = float(center_angle) + correction\n\n #if (center_angle >= 0.01 or center_angle <= -.01):\n #images.append(left_image)\n lines2.append(left_angle)\n #lines2.append(left_angle)\n #left flipped \n #images.append(np.fliplr(left_image))\n #angles.append(-left_angle)\n\n #images.append(np.fliplr(left_image))\n #angles.append(-left_angle)\n #print ('ca-')\n #print (center_angle)\n #print ('--')\n #print (0.01*random.uniform(2, 3.5))\n\n #lines2.append(center_angle + random.uniform(.01, .07))\n\n # right images, stripping starting space\n #imagesPaths.append(imagesPathsAll[i][2].replace(\" \", \"\"))\n # randomly adding angles for right images\n #lines2.append(center_angle - random.uniform(.01, .07))\n\n \n right_angle = float(center_angle) - correction\n #images.append(right_image)\n #angles.append(right_angle)\n lines2.append(right_angle)\n \n\n#lines2.extend(lines)\nprint(len(lines2))\nplt.hist(lines2, bins= 50, color= 'red')\nplt.xlabel('steering value')\nplt.ylabel('counts')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7331fee2c949279503840779d639e3fe35ab04f | 13,166 | ipynb | Jupyter Notebook | examples/text/ktrain-ONNX-TFLite-examples.ipynb | husmen/ktrain | 4147b0bd146deb513c6f94505908294a5163efac | [
"Apache-2.0"
] | 1,013 | 2019-06-04T14:25:24.000Z | 2022-03-26T05:52:00.000Z | examples/text/ktrain-ONNX-TFLite-examples.ipynb | husmen/ktrain | 4147b0bd146deb513c6f94505908294a5163efac | [
"Apache-2.0"
] | 427 | 2019-06-17T13:45:50.000Z | 2022-03-25T16:23:49.000Z | examples/text/ktrain-ONNX-TFLite-examples.ipynb | husmen/ktrain | 4147b0bd146deb513c6f94505908294a5163efac | [
"Apache-2.0"
] | 272 | 2019-06-05T03:19:07.000Z | 2022-03-28T02:23:37.000Z | 42.063898 | 426 | 0.636564 | [
[
[
"%reload_ext autoreload\n%autoreload 2\n%matplotlib inline\nimport os\nos.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\";\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"\" # Enforce CPU usage\nfrom psutil import cpu_count # Do \"pip install psutil\" if not already installed\nimport tensorflow as tf\nimport numpy as np\n\n# Constants from the performance optimization available in onnxruntime\n# It needs to be done before importing onnxruntime\nos.environ[\"OMP_NUM_THREADS\"] = str(cpu_count(logical=True))\nos.environ[\"OMP_WAIT_POLICY\"] = 'ACTIVE'",
"_____no_output_____"
]
],
[
[
"## ONNX and TensorFlow Lite Support in `ktrain`\n\nAs of v0.24.x, `predictors` in **ktrain** provide built-in support for exports to [ONNX](https://github.com/onnx/onnx) and [TensorFlow Lite](https://www.tensorflow.org/lite) formats. This allows you to more easily take a **ktrain**-trained model and use it to make predictions *outside* of **ktrain** (or even TensorFlow) in deployment scenarios. In this notebook, we will show a text classification example of this.\n\nLet us begin by loading a previously trained `Predictor` instance, which consists of both the **DistilBert** model and its associated `Preprocessor` instance. ",
"_____no_output_____"
]
],
[
[
"import ktrain\npredictor = ktrain.load_predictor('/tmp/my_distilbert_predictor')\nprint(predictor.model)\nprint(predictor.preproc)",
"<transformers.models.distilbert.modeling_tf_distilbert.TFDistilBertForSequenceClassification object at 0x7f929b30a710>\n<ktrain.text.preprocessor.Transformer object at 0x7f93ed5b88d0>\n"
]
],
[
[
"The cell above assumes that the model was previously trained on the 20 Newsgroup corpus using a GPU (e.g., on Google Colab). The files in question can be easily created with **ktrain**:\n\n```python\n# install ktrain\n!pip install ktrain\n\n# load text data\ncategories = ['alt.atheism', 'comp.graphics', 'sci.med', 'soc.religion.christian']\nfrom sklearn.datasets import fetch_20newsgroups\ntrain_b = fetch_20newsgroups(subset='train', categories=categories, shuffle=True)\ntest_b = fetch_20newsgroups(subset='test',categories=categories, shuffle=True)\n(x_train, y_train) = (train_b.data, train_b.target)\n(x_test, y_test) = (test_b.data, test_b.target)\n\n# build, train, and validate model (Transformer is wrapper around transformers library)\nimport ktrain\nfrom ktrain import text\nMODEL_NAME = 'distilbert-base-uncased'\nt = text.Transformer(MODEL_NAME, maxlen=500, class_names=train_b.target_names)\ntrn = t.preprocess_train(x_train, y_train)\nval = t.preprocess_test(x_test, y_test)\nmodel = t.get_classifier()\nlearner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)\nlearner.fit_onecycle(5e-5, 1)\n\n# save predictor\npredictor = ktrain.get_predictor(learner.model, t)\npredictor.save('/tmp/my_distilbert_predictor')\n```",
"_____no_output_____"
],
[
"## TensorFlow Lite Inferences\n\nHere, we export our model to TensorFlow LITE and use it to make predictions *without* **ktrain**.",
"_____no_output_____"
]
],
[
[
"# export TensorFlow Lite model\ntflite_model_path = '/tmp/model.tflite'\ntflite_model_path = predictor.export_model_to_tflite(tflite_model_path)\n\n# load interpreter\ninterpreter = tf.lite.Interpreter(model_path=tflite_model_path)\ninterpreter.allocate_tensors()\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\n\n# set maxlen, class_names, and tokenizer (use settings employed when training the model - see above)\nmaxlen = 500 # from above\nclass_names = ['alt.atheism', 'comp.graphics', 'sci.med', 'soc.religion.christian'] # from above\nfrom transformers import AutoTokenizer\ntokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')\n\n# preprocess and predict outside of ktrain\ndoc = 'I received a chest x-ray at the hospital.'\ninputs = tokenizer(doc, max_length=maxlen, padding='max_length', truncation=True, return_tensors=\"tf\")\ninterpreter.set_tensor(input_details[0]['index'], inputs['attention_mask'])\ninterpreter.set_tensor(input_details[1]['index'], inputs['input_ids'])\ninterpreter.invoke()\noutput_tflite = interpreter.get_tensor(output_details[0]['index'])\nprint()\nprint('text input: %s' % (doc))\nprint()\nprint('predicted logits: %s' % (output_tflite))\nprint()\nprint(\"predicted class: %s\" % ( class_names[np.argmax(output_tflite[0])]) )",
"converting to TFLite format ... this may take a few moments...\n"
]
],
[
[
"## ONNX Inferences\n\nHere, we will export our trained model to ONNX and make predictions *outside* of both **ktrain** and **TensorFlow** using the ONNX runtime. Please ensure the ONNX libraries are installed before proceeding with:\n```\npip install -q --upgrade onnxruntime==1.5.1 onnxruntime-tools onnx keras2onnx\n```\n\nIt is possible to transform a TensorFlow model directly to ONNX using: `predictor.export_model_to_onnx(onnx_model_path)`, similar to what was done for TFLite above. However, for **transformers** models like the **DistilBERT** text classifier used in this example, it is recommended that the model first be converted to PyTorch and then to ONNX for better performance of the final ONNX model. \n\nIn the cell below, we use `AutoModelForSequenceClassification.from_pretrained` to load our classifier as a PyTorch model before converting to ONNX. We, then, use our ONNX model to make predictions **without** the need for ktrain or TensorFlow or PyTorch. This is well-suited for deployments that require smaller footprints (e.g., Heroku).",
"_____no_output_____"
]
],
[
[
"# set maxlen, class_names, and tokenizer (use settings employed when training the model - see above)\nmodel_name = 'distilbert-base-uncased'\nmaxlen = 500 # from above\nclass_names = ['alt.atheism', 'comp.graphics', 'sci.med', 'soc.religion.christian'] # from above\nfrom transformers import AutoTokenizer\ntokenizer = AutoTokenizer.from_pretrained(model_name)\n\n\n# imports\nimport numpy as np\nfrom transformers.convert_graph_to_onnx import convert, optimize, quantize\nfrom transformers import AutoModelForSequenceClassification\nfrom pathlib import Path\n\n# paths\npredictor_path = '/tmp/my_distilbert_predictor'\npt_path = predictor_path+'_pt'\npt_onnx_path = pt_path +'_onnx/model.onnx'\n\n# convert to ONNX\nAutoModelForSequenceClassification.from_pretrained(predictor_path, \n from_tf=True).save_pretrained(pt_path)\nconvert(framework='pt', model=pt_path,output=Path(pt_onnx_path), opset=11, \n tokenizer=model_name, pipeline_name='sentiment-analysis')\npt_onnx_quantized_path = quantize(optimize(Path(pt_onnx_path)))\n\n# create ONNX session\ndef create_onnx_session(onnx_model_path, provider='CPUExecutionProvider'):\n \"\"\"\n Creates ONNX inference session from provided onnx_model_path\n \"\"\"\n\n from onnxruntime import GraphOptimizationLevel, InferenceSession, SessionOptions, get_all_providers\n assert provider in get_all_providers(), f\"provider {provider} not found, {get_all_providers()}\"\n\n # Few properties that might have an impact on performances (provided by MS)\n options = SessionOptions()\n options.intra_op_num_threads = 0\n options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL\n\n # Load the model as a graph and prepare the CPU backend \n session = InferenceSession(onnx_model_path, options, providers=[provider])\n session.disable_fallback()\n return session\nsess = create_onnx_session(pt_onnx_quantized_path.as_posix())\n\n# tokenize document and make prediction\ntokens = tokenizer.encode_plus('I received a chest x-ray at the hospital.', max_length=maxlen, truncation=True)\ntokens = {name: np.atleast_2d(value) for name, value in tokens.items()}\nprint()\nprint()\nprint(\"predicted class: %s\" % (class_names[np.argmax(sess.run(None, tokens)[0])]))",
"ONNX opset version set to: 11\nLoading pipeline (model: /tmp/my_distilbert_predictor_pt, tokenizer: distilbert-base-uncased)\nCreating folder /tmp/my_distilbert_predictor_pt_onnx\nUsing framework PyTorch: 1.8.0\nFound input input_ids with shape: {0: 'batch', 1: 'sequence'}\nFound input attention_mask with shape: {0: 'batch', 1: 'sequence'}\nFound output output_0 with shape: {0: 'batch'}\nEnsuring inputs are in correct order\nhead_mask is not present in the generated input list.\nGenerated inputs order: ['input_ids', 'attention_mask']\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e733269ebdf58ce80db6c9de79b34b12e6bd3ae1 | 63,263 | ipynb | Jupyter Notebook | docs_src/layers.ipynb | xnutsive/fastai | 8b12289dfe589c8e7ba9c9fa878bd524d195043e | [
"Apache-2.0"
] | 1 | 2019-03-31T09:07:00.000Z | 2019-03-31T09:07:00.000Z | docs_src/layers.ipynb | xnutsive/fastai | 8b12289dfe589c8e7ba9c9fa878bd524d195043e | [
"Apache-2.0"
] | null | null | null | docs_src/layers.ipynb | xnutsive/fastai | 8b12289dfe589c8e7ba9c9fa878bd524d195043e | [
"Apache-2.0"
] | null | null | null | 31.087469 | 647 | 0.541533 | [
[
[
"# Model Layers",
"_____no_output_____"
],
[
"This module contains many layer classes that we might be interested in using in our models. These layers complement the default [Pytorch layers](https://pytorch.org/docs/stable/nn.html) which we can also use as predefined layers.",
"_____no_output_____"
]
],
[
[
"from fastai.vision import *\nfrom fastai.gen_doc.nbdoc import *",
"_____no_output_____"
]
],
[
[
"## Custom fastai modules",
"_____no_output_____"
]
],
[
[
"show_doc(AdaptiveConcatPool2d, title_level=3)",
"_____no_output_____"
],
[
"from fastai.gen_doc.nbdoc import *\nfrom fastai.layers import * ",
"_____no_output_____"
]
],
[
[
"The output will be `2*sz`, or just 2 if `sz` is None.",
"_____no_output_____"
],
[
"The [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d) object uses adaptive average pooling and adaptive max pooling and concatenates them both. We use this because it provides the model with the information of both methods and improves performance. This technique is called `adaptive` because it allows us to decide on what output dimensions we want, instead of choosing the input's dimensions to fit a desired output size.\n\nLet's try training with Adaptive Average Pooling first, then with Adaptive Max Pooling and finally with the concatenation of them both to see how they fare in performance.\n\nWe will first define a [`simple_cnn`](/layers.html#simple_cnn) using [Adapative Max Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveMaxPool2d) by changing the source code a bit.",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.MNIST_SAMPLE)\ndata = ImageDataBunch.from_folder(path)",
"_____no_output_____"
],
[
"def simple_cnn_max(actns:Collection[int], kernel_szs:Collection[int]=None,\n strides:Collection[int]=None) -> nn.Sequential:\n \"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`\"\n nl = len(actns)-1\n kernel_szs = ifnone(kernel_szs, [3]*nl)\n strides = ifnone(strides , [2]*nl)\n layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])\n for i in range(len(strides))]\n layers.append(nn.Sequential(nn.AdaptiveMaxPool2d(1), Flatten()))\n return nn.Sequential(*layers)",
"_____no_output_____"
],
[
"model = simple_cnn_max((3,16,16,2))\nlearner = Learner(data, model, metrics=[accuracy])\nlearner.fit(1)",
"_____no_output_____"
]
],
[
[
"Now let's try with [Adapative Average Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d) now.",
"_____no_output_____"
]
],
[
[
"def simple_cnn_avg(actns:Collection[int], kernel_szs:Collection[int]=None,\n strides:Collection[int]=None) -> nn.Sequential:\n \"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`\"\n nl = len(actns)-1\n kernel_szs = ifnone(kernel_szs, [3]*nl)\n strides = ifnone(strides , [2]*nl)\n layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])\n for i in range(len(strides))]\n layers.append(nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten()))\n return nn.Sequential(*layers)",
"_____no_output_____"
],
[
"model = simple_cnn_avg((3,16,16,2))\nlearner = Learner(data, model, metrics=[accuracy])\nlearner.fit(1)",
"_____no_output_____"
]
],
[
[
"Finally we will try with the concatenation of them both [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d). We will see that, in fact, it increases our accuracy and decreases our loss considerably!",
"_____no_output_____"
]
],
[
[
"def simple_cnn(actns:Collection[int], kernel_szs:Collection[int]=None,\n strides:Collection[int]=None) -> nn.Sequential:\n \"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`\"\n nl = len(actns)-1\n kernel_szs = ifnone(kernel_szs, [3]*nl)\n strides = ifnone(strides , [2]*nl)\n layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])\n for i in range(len(strides))]\n layers.append(nn.Sequential(AdaptiveConcatPool2d(1), Flatten()))\n return nn.Sequential(*layers)",
"_____no_output_____"
],
[
"model = simple_cnn((3,16,16,2))\nlearner = Learner(data, model, metrics=[accuracy])\nlearner.fit(1)",
"_____no_output_____"
],
[
"show_doc(Lambda, title_level=3)",
"_____no_output_____"
]
],
[
[
"This is very useful to use functions as layers in our networks inside a [Sequential](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential) object. So, for example, say we want to apply a [log_softmax loss](https://pytorch.org/docs/stable/nn.html#torch.nn.functional.log_softmax) and we need to change the shape of our output batches to be able to use this loss. We can add a layer that applies the necessary change in shape by calling:\n\n`Lambda(lambda x: x.view(x.size(0),-1))`",
"_____no_output_____"
],
[
"Let's see an example of how the shape of our output can change when we add this layer.",
"_____no_output_____"
]
],
[
[
"model = nn.Sequential(\n nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n nn.AdaptiveAvgPool2d(1),\n)\n\nmodel.cuda()\n\nfor xb, yb in data.train_dl:\n out = (model(*[xb]))\n print(out.size())\n break",
"torch.Size([64, 10, 1, 1])\n"
],
[
"model = nn.Sequential(\n nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n nn.AdaptiveAvgPool2d(1),\n Lambda(lambda x: x.view(x.size(0),-1))\n)\n\nmodel.cuda()\n\nfor xb, yb in data.train_dl:\n out = (model(*[xb]))\n print(out.size())\n break",
"torch.Size([64, 10])\n"
],
[
"show_doc(Flatten)",
"_____no_output_____"
]
],
[
[
"The function we build above is actually implemented in our library as [`Flatten`](/layers.html#Flatten). We can see that it returns the same size when we run it.",
"_____no_output_____"
]
],
[
[
"model = nn.Sequential(\n nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n nn.AdaptiveAvgPool2d(1),\n Flatten(),\n)\n\nmodel.cuda()\n\nfor xb, yb in data.train_dl:\n out = (model(*[xb]))\n print(out.size())\n break",
"torch.Size([64, 10])\n"
],
[
"show_doc(PoolFlatten)",
"_____no_output_____"
]
],
[
[
"We can combine these two final layers ([AdaptiveAvgPool2d](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d) and [`Flatten`](/layers.html#Flatten)) by using [`PoolFlatten`](/layers.html#PoolFlatten).",
"_____no_output_____"
]
],
[
[
"model = nn.Sequential(\n nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n PoolFlatten()\n)\n\nmodel.cuda()\n\nfor xb, yb in data.train_dl:\n out = (model(*[xb]))\n print(out.size())\n break",
"torch.Size([64, 10])\n"
]
],
[
[
"Another use we give to the Lambda function is to resize batches with [`ResizeBatch`](/layers.html#ResizeBatch) when we have a layer that expects a different input than what comes from the previous one.",
"_____no_output_____"
]
],
[
[
"show_doc(ResizeBatch)",
"_____no_output_____"
],
[
"a = torch.tensor([[1., -1.], [1., -1.]])\nprint(a)",
"tensor([[ 1., -1.],\n [ 1., -1.]])\n"
],
[
"out = ResizeBatch(4)\nprint(out(a))",
"tensor([[ 1., -1., 1., -1.]])\n"
],
[
"show_doc(Debugger, title_level=3)",
"_____no_output_____"
]
],
[
[
"The debugger module allows us to peek inside a network while its training and see in detail what is going on. We can see inputs, ouputs and sizes at any point in the network.\n\nFor instance, if you run the following:\n\n``` python\nmodel = nn.Sequential(\n nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n Debugger(),\n nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),\n)\n\nmodel.cuda()\n\nlearner = Learner(data, model, metrics=[accuracy])\nlearner.fit(5)\n```\n... you'll see something like this:\n\n```\n/home/ubuntu/fastai/fastai/layers.py(74)forward()\n 72 def forward(self,x:Tensor) -> Tensor:\n 73 set_trace()\n---> 74 return x\n 75 \n 76 class StdUpsample(nn.Module):\n\nipdb>\n```",
"_____no_output_____"
]
],
[
[
"show_doc(PixelShuffle_ICNR, title_level=3)",
"_____no_output_____"
],
[
"show_doc(MergeLayer, title_level=3)",
"_____no_output_____"
],
[
"show_doc(PartialLayer, title_level=3)",
"_____no_output_____"
],
[
"show_doc(SigmoidRange, title_level=3)",
"_____no_output_____"
],
[
"show_doc(SequentialEx, title_level=3)",
"_____no_output_____"
],
[
"show_doc(SelfAttention, title_level=3)",
"_____no_output_____"
],
[
"show_doc(BatchNorm1dFlat, title_level=3)",
"_____no_output_____"
]
],
[
[
"## Loss functions",
"_____no_output_____"
]
],
[
[
"show_doc(FlattenedLoss, title_level=3)",
"_____no_output_____"
]
],
[
[
"Create an instance of `func` with `args` and `kwargs`. When passing an output and target, it\n- puts `axis` first in output and target with a transpose\n- casts the target to `float` is `floatify=True`\n- squeezes the `output` to two dimensions if `is_2d`, otherwise one dimension, squeezes the target to one dimension\n- applied the instance of `func`.",
"_____no_output_____"
]
],
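[
[
"For illustration, here is a minimal sketch (not part of the original docs) of how one of these flattened losses can be used on dummy data; the shapes below are arbitrary assumptions:\n\n``` python\nimport torch\n\nloss_func = CrossEntropyFlat()        # wraps nn.CrossEntropyLoss in a FlattenedLoss\noutput = torch.randn(4, 10)           # batch of 4 samples, 10 classes\ntarget = torch.randint(0, 10, (4,))   # integer class labels\nloss = loss_func(output, target)      # output and target are flattened internally\n```",
"_____no_output_____"
]
],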
[
[
"show_doc(BCEFlat)",
"_____no_output_____"
],
[
"show_doc(BCEWithLogitsFlat)",
"_____no_output_____"
],
[
"show_doc(CrossEntropyFlat)",
"_____no_output_____"
],
[
"show_doc(MSELossFlat)",
"_____no_output_____"
],
[
"show_doc(NoopLoss)",
"_____no_output_____"
],
[
"show_doc(WassersteinLoss)",
"_____no_output_____"
]
],
[
[
"## Helper functions to create modules",
"_____no_output_____"
]
],
[
[
"show_doc(bn_drop_lin, doc_string=False)",
"_____no_output_____"
]
],
[
[
"The [`bn_drop_lin`](/layers.html#bn_drop_lin) function returns a sequence of [batch normalization](https://arxiv.org/abs/1502.03167), [dropout](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf) and a linear layer. This custom layer is usually used at the end of a model. \n\n`n_in` represents the number of size of the input `n_out` the size of the output, `bn` whether we want batch norm or not, `p` is how much dropout and `actn` is an optional parameter to add an activation function at the end.",
"_____no_output_____"
]
],
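[
[
"As a quick example (the sizes here are arbitrary assumptions, not from the docs), the returned list of layers can be wrapped in `nn.Sequential` to build a model head:\n\n``` python\nlayers = bn_drop_lin(1024, 512, bn=True, p=0.5, actn=nn.ReLU(inplace=True))\nhead = nn.Sequential(*layers)   # BatchNorm1d -> Dropout -> Linear -> ReLU\n```",
"_____no_output_____"
]
],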
[
[
"show_doc(conv2d)",
"_____no_output_____"
],
[
"show_doc(conv2d_trans)",
"_____no_output_____"
],
[
"show_doc(conv_layer, doc_string=False)",
"_____no_output_____"
]
],
[
[
"The [`conv_layer`](/layers.html#conv_layer) function returns a sequence of [nn.Conv2D](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d), [BatchNorm](https://arxiv.org/abs/1502.03167) and a ReLU or [leaky RELU](https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf) activation function.\n\n`n_in` represents the number of size of the input `n_out` the size of the output, `ks` kernel size, `stride` the stride with which we want to apply the convolutions. `bias` will decide if they have bias or not (if None, defaults to True unless using batchnorm). `norm_type` selects type of normalization (or `None`). If `leaky` is None, the activation is a standard `ReLU`, otherwise it's a `LearkyReLU` of slope `leaky`. Finally if `transpose=True`, the convolution is replaced by a `ConvTranspose2D`.",
"_____no_output_____"
]
],
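[
[
"A minimal sketch of typical usage (the channel sizes and input shape are arbitrary assumptions):\n\n``` python\nblock = conv_layer(3, 16, ks=3, stride=2)   # Conv2d -> BatchNorm2d -> ReLU with the defaults\nx = torch.randn(1, 3, 64, 64)\nprint(block(x).shape)                       # torch.Size([1, 16, 32, 32])\n```",
"_____no_output_____"
]
],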
[
[
"show_doc(embedding, doc_string=False)",
"_____no_output_____"
]
],
[
[
"Create an [embedding layer](https://arxiv.org/abs/1711.09160) with input size `ni` and output size `nf`.",
"_____no_output_____"
]
],
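[
[
"For example (the sizes are chosen arbitrarily for illustration):\n\n``` python\nemb = embedding(1000, 50)            # 1000 categories, 50-dimensional embeddings\nidx = torch.randint(0, 1000, (4,))\nprint(emb(idx).shape)                # torch.Size([4, 50])\n```",
"_____no_output_____"
]
],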
[
[
"show_doc(relu)",
"_____no_output_____"
],
[
"show_doc(res_block)",
"_____no_output_____"
],
[
"show_doc(sigmoid_range)",
"_____no_output_____"
],
[
"show_doc(simple_cnn)",
"_____no_output_____"
]
],
[
[
"## Initialization of modules",
"_____no_output_____"
]
],
[
[
"show_doc(batchnorm_2d)",
"_____no_output_____"
],
[
"show_doc(icnr)",
"_____no_output_____"
],
[
"show_doc(trunc_normal_)",
"_____no_output_____"
],
[
"show_doc(icnr)",
"_____no_output_____"
],
[
"show_doc(NormType)",
"_____no_output_____"
]
],
[
[
"## Undocumented Methods - Methods moved below this line will intentionally be hidden",
"_____no_output_____"
]
],
[
[
"show_doc(Debugger.forward)",
"_____no_output_____"
],
[
"show_doc(Lambda.forward)",
"_____no_output_____"
],
[
"show_doc(AdaptiveConcatPool2d.forward)",
"_____no_output_____"
],
[
"show_doc(NoopLoss.forward)",
"_____no_output_____"
],
[
"show_doc(PixelShuffle_ICNR.forward)",
"_____no_output_____"
],
[
"show_doc(WassersteinLoss.forward)",
"_____no_output_____"
],
[
"show_doc(MergeLayer.forward)",
"_____no_output_____"
],
[
"show_doc(SigmoidRange.forward)",
"_____no_output_____"
],
[
"show_doc(MergeLayer.forward)",
"_____no_output_____"
],
[
"show_doc(SelfAttention.forward)",
"_____no_output_____"
],
[
"show_doc(SequentialEx.forward)",
"_____no_output_____"
],
[
"show_doc(SequentialEx.append)",
"_____no_output_____"
],
[
"show_doc(SequentialEx.extend)",
"_____no_output_____"
],
[
"show_doc(SequentialEx.insert)",
"_____no_output_____"
],
[
"show_doc(PartialLayer.forward)",
"_____no_output_____"
],
[
"show_doc(BatchNorm1dFlat.forward)",
"_____no_output_____"
],
[
"show_doc(Flatten.forward)",
"_____no_output_____"
]
],
[
[
"## New Methods - Please document or move to the undocumented section",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e73336730e8b2d840ce412c7802d6fc37633b016 | 36,729 | ipynb | Jupyter Notebook | docs/tutorials/kb_aggregate/python/kb_aggregating_count_matrices.ipynb | lambdamoses/kallistobustools | 1e4165c0af8e252f8a417f2522e61c30539a7062 | [
"MIT"
] | 67 | 2019-06-13T05:20:31.000Z | 2022-03-25T20:57:33.000Z | docs/tutorials/kb_aggregate/python/kb_aggregating_count_matrices.ipynb | lambdamoses/kallistobustools | 1e4165c0af8e252f8a417f2522e61c30539a7062 | [
"MIT"
] | 31 | 2019-06-18T20:49:36.000Z | 2022-03-23T08:28:20.000Z | docs/tutorials/kb_aggregate/python/kb_aggregating_count_matrices.ipynb | lambdamoses/kallistobustools | 1e4165c0af8e252f8a417f2522e61c30539a7062 | [
"MIT"
] | 21 | 2019-07-02T18:25:26.000Z | 2022-01-27T00:39:18.000Z | 34.65 | 541 | 0.425985 | [
[
[
"<a href=\"https://colab.research.google.com/github/pachterlab/kallistobustools/blob/master/tutorials/docs/tutorials/kb_aggregate/python/kb_aggregating_count_matrices.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Aggregating multiple count matrices tutorial\n\nThis tutorial describes how to aggregate multiple count matrices by concatenating them into a single [AnnData](https://anndata.readthedocs.io/en/latest/anndata.AnnData.html) object with batch labels for different samples.\n\nThis is similar to the Cell Ranger aggr function, however no normalization is performed. cellranger aggr is described at https://support.10xgenomics.com/single-cell-gene-expression/software/pipelines/latest/using/aggregate\n\nFor this tutorial we use dataset E-MTAB-6108.",
"_____no_output_____"
],
[
"The notebook will take some time to run. To ensure that Google Colab does not shut down because of inactivity paste the following code into the console of this tab (*Cntrl [Mac: Cmd] + Option + i -> Console tab -> paste code -> press Enter*).\n\n```javascript\nfunction ConnectButton(){\n console.log(\"Connect pushed\"); \n document.querySelector(\"#top-toolbar > colab-connect-button\").shadowRoot.querySelector(\"#connect\").click() \n}\nsetInterval(ConnectButton,60000);\n```",
"_____no_output_____"
],
[
"## Download the raw data\n\nThe raw data for E-MTAB-6108 is available at https://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-6108/",
"_____no_output_____"
]
],
[
[
"%%time\n!wget -q https://www.ebi.ac.uk/arrayexpress/files/E-MTAB-6108/iPSC_RGCscRNAseq_Sample1_L005_R1.fastq.gz\n!wget -q https://www.ebi.ac.uk/arrayexpress/files/E-MTAB-6108/iPSC_RGCscRNAseq_Sample1_L005_R2.fastq.gz\n!wget -q https://www.ebi.ac.uk/arrayexpress/files/E-MTAB-6108/iPSC_RGCscRNAseq_Sample2_L005_R1.fastq.gz\n!wget -q https://www.ebi.ac.uk/arrayexpress/files/E-MTAB-6108/iPSC_RGCscRNAseq_Sample2_L005_R2.fastq.gz",
"CPU times: user 5.42 s, sys: 839 ms, total: 6.25 s\nWall time: 15min 30s\n"
]
],
[
[
"## Install `kb`\n\nInstall `kb` for running the kallisto|bustools workflow.",
"_____no_output_____"
]
],
[
[
"!pip install --quiet kb-python",
"\u001b[K |████████████████████████████████| 59.1MB 77kB/s \n\u001b[K |████████████████████████████████| 10.3MB 34.3MB/s \n\u001b[K |████████████████████████████████| 13.2MB 50.1MB/s \n\u001b[K |████████████████████████████████| 51kB 5.6MB/s \n\u001b[K |████████████████████████████████| 81kB 6.8MB/s \n\u001b[K |████████████████████████████████| 112kB 56.7MB/s \n\u001b[K |████████████████████████████████| 71kB 6.7MB/s \n\u001b[K |████████████████████████████████| 1.2MB 50.2MB/s \n\u001b[K |████████████████████████████████| 51kB 5.0MB/s \n\u001b[?25h Building wheel for loompy (setup.py) ... \u001b[?25l\u001b[?25hdone\n Building wheel for sinfo (setup.py) ... \u001b[?25l\u001b[?25hdone\n Building wheel for umap-learn (setup.py) ... \u001b[?25l\u001b[?25hdone\n Building wheel for numpy-groupies (setup.py) ... \u001b[?25l\u001b[?25hdone\n Building wheel for pynndescent (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
]
],
[
[
"## Download a pre-built human index\n\n__Note:__ See [this notebook]() for a tutorial on how to build custom transcriptome or RNA velocity indices.",
"_____no_output_____"
]
],
[
[
"%%time\n!kb ref -d human -i index.idx -g t2g.txt",
"[2021-03-31 20:50:10,750] INFO Downloading files for human from https://caltech.box.com/shared/static/v1nm7lpnqz5syh8dyzdk2zs8bglncfib.gz to tmp/v1nm7lpnqz5syh8dyzdk2zs8bglncfib.gz\n100% 2.23G/2.23G [01:35<00:00, 25.0MB/s]\n[2021-03-31 20:51:47,840] INFO Extracting files from tmp/v1nm7lpnqz5syh8dyzdk2zs8bglncfib.gz\nCPU times: user 1.51 s, sys: 288 ms, total: 1.8 s\nWall time: 2min 15s\n"
]
],
[
[
"## Generate an RNA count matrices in H5AD format\n\nThe following command will generate an RNA count matrix of cells (rows) by genes (columns) in H5AD format, which is a binary format used to store [Anndata](https://anndata.readthedocs.io/en/stable/) objects. Notice we are providing the index and transcript-to-gene mapping we downloaded in the previous step to the `-i` and `-g` arguments respectively. Also, these reads were generated with the 10x Genomics Chromium Single Cell v2 Chemistry, hence the `-x 10xv2` argument. To view other supported technologies, run `kb --list`.\n\nThe `--filter` flag is used to filter out barcodes with low UMI counts. This will generate two matrices, one in the `counts_unfiltered` directory and another in the `counts_filtered` directory.\n\n__Note:__ If you would like a Loom file instead, replace the `--h5ad` flag with `--loom`. If you want to use the raw matrix output by `kb` instead of their H5AD or Loom converted files, omit these flags.",
"_____no_output_____"
],
[
"### Sample 1",
"_____no_output_____"
]
],
[
[
"%%time\n!kb count -i index.idx -g t2g.txt -x 10xv2 -o sample1 --h5ad -t 2 --filter bustools \\\niPSC_RGCscRNAseq_Sample1_L005_R1.fastq.gz \\\niPSC_RGCscRNAseq_Sample1_L005_R2.fastq.gz",
"[2021-03-31 20:52:24,861] INFO Using index index.idx to generate BUS file to sample1 from\n[2021-03-31 20:52:24,861] INFO iPSC_RGCscRNAseq_Sample1_L005_R1.fastq.gz\n[2021-03-31 20:52:24,861] INFO iPSC_RGCscRNAseq_Sample1_L005_R2.fastq.gz\n[2021-03-31 21:15:29,824] INFO Sorting BUS file sample1/output.bus to sample1/tmp/output.s.bus\n[2021-03-31 21:16:21,667] INFO Whitelist not provided\n[2021-03-31 21:16:21,668] INFO Copying pre-packaged 10XV2 whitelist to sample1\n[2021-03-31 21:16:22,484] INFO Inspecting BUS file sample1/tmp/output.s.bus\n[2021-03-31 21:16:36,519] INFO Correcting BUS records in sample1/tmp/output.s.bus to sample1/tmp/output.s.c.bus with whitelist sample1/10xv2_whitelist.txt\n[2021-03-31 21:16:46,573] INFO Sorting BUS file sample1/tmp/output.s.c.bus to sample1/output.unfiltered.bus\n[2021-03-31 21:17:25,239] INFO Generating count matrix sample1/counts_unfiltered/cells_x_genes from BUS file sample1/output.unfiltered.bus\n[2021-03-31 21:17:48,992] INFO Reading matrix sample1/counts_unfiltered/cells_x_genes.mtx\n[2021-03-31 21:18:00,111] INFO Writing matrix to h5ad sample1/counts_unfiltered/adata.h5ad\n[2021-03-31 21:18:00,914] INFO Filtering with bustools\n[2021-03-31 21:18:00,915] INFO Generating whitelist sample1/filter_barcodes.txt from BUS file sample1/output.unfiltered.bus\n[2021-03-31 21:18:01,259] INFO Correcting BUS records in sample1/output.unfiltered.bus to sample1/tmp/output.unfiltered.c.bus with whitelist sample1/filter_barcodes.txt\n[2021-03-31 21:18:09,292] INFO Sorting BUS file sample1/tmp/output.unfiltered.c.bus to sample1/output.filtered.bus\n[2021-03-31 21:18:46,457] INFO Generating count matrix sample1/counts_filtered/cells_x_genes from BUS file sample1/output.filtered.bus\n[2021-03-31 21:19:09,724] INFO Reading matrix sample1/counts_filtered/cells_x_genes.mtx\n[2021-03-31 21:19:18,183] INFO Writing matrix to h5ad sample1/counts_filtered/adata.h5ad\nCPU times: user 9.95 s, sys: 1.39 s, total: 11.3 s\nWall time: 26min 56s\n"
]
],
[
[
"### Sample 2",
"_____no_output_____"
]
],
[
[
"%%time\n!kb count -i index.idx -g t2g.txt -x 10xv2 -o sample2 --h5ad -t 2 --filter bustools \\\niPSC_RGCscRNAseq_Sample2_L005_R1.fastq.gz \\\niPSC_RGCscRNAseq_Sample2_L005_R2.fastq.gz",
"[2021-03-31 21:19:22,185] INFO Using index index.idx to generate BUS file to sample2 from\n[2021-03-31 21:19:22,185] INFO iPSC_RGCscRNAseq_Sample2_L005_R1.fastq.gz\n[2021-03-31 21:19:22,185] INFO iPSC_RGCscRNAseq_Sample2_L005_R2.fastq.gz\n[2021-03-31 21:37:11,095] INFO Sorting BUS file sample2/output.bus to sample2/tmp/output.s.bus\n[2021-03-31 21:37:35,255] INFO Whitelist not provided\n[2021-03-31 21:37:35,255] INFO Copying pre-packaged 10XV2 whitelist to sample2\n[2021-03-31 21:37:35,379] INFO Inspecting BUS file sample2/tmp/output.s.bus\n[2021-03-31 21:37:43,363] INFO Correcting BUS records in sample2/tmp/output.s.bus to sample2/tmp/output.s.c.bus with whitelist sample2/10xv2_whitelist.txt\n[2021-03-31 21:37:47,960] INFO Sorting BUS file sample2/tmp/output.s.c.bus to sample2/output.unfiltered.bus\n[2021-03-31 21:37:58,445] INFO Generating count matrix sample2/counts_unfiltered/cells_x_genes from BUS file sample2/output.unfiltered.bus\n[2021-03-31 21:38:08,901] INFO Reading matrix sample2/counts_unfiltered/cells_x_genes.mtx\n[2021-03-31 21:38:13,045] INFO Writing matrix to h5ad sample2/counts_unfiltered/adata.h5ad\n[2021-03-31 21:38:13,797] INFO Filtering with bustools\n[2021-03-31 21:38:13,798] INFO Generating whitelist sample2/filter_barcodes.txt from BUS file sample2/output.unfiltered.bus\n[2021-03-31 21:38:13,965] INFO Correcting BUS records in sample2/output.unfiltered.bus to sample2/tmp/output.unfiltered.c.bus with whitelist sample2/filter_barcodes.txt\n[2021-03-31 21:38:16,943] INFO Sorting BUS file sample2/tmp/output.unfiltered.c.bus to sample2/output.filtered.bus\n[2021-03-31 21:38:25,772] INFO Generating count matrix sample2/counts_filtered/cells_x_genes from BUS file sample2/output.filtered.bus\n[2021-03-31 21:38:33,900] INFO Reading matrix sample2/counts_filtered/cells_x_genes.mtx\n[2021-03-31 21:38:36,553] INFO Writing matrix to h5ad sample2/counts_filtered/adata.h5ad\nCPU times: user 7.29 s, sys: 1.04 s, total: 8.33 s\nWall time: 19min 17s\n"
]
],
[
[
"# Install `anndata`",
"_____no_output_____"
]
],
[
[
"!pip install --quiet anndata",
"_____no_output_____"
]
],
[
[
"# Read sample1 and sample2 gene counts into anndata",
"_____no_output_____"
]
],
[
[
"import anndata\nsample1 = anndata.read_h5ad('sample1/counts_filtered/adata.h5ad')\nsample2 = anndata.read_h5ad('sample2/counts_filtered/adata.h5ad')",
"_____no_output_____"
],
[
"sample1",
"_____no_output_____"
],
[
"sample1.X",
"_____no_output_____"
],
[
"sample1.obs.head()",
"_____no_output_____"
],
[
"sample1.var.head()",
"_____no_output_____"
],
[
"sample2",
"_____no_output_____"
],
[
"sample2.X",
"_____no_output_____"
],
[
"sample2.obs.head()",
"_____no_output_____"
],
[
"sample2.var.head()",
"_____no_output_____"
]
],
[
[
"## Concatenate the anndatas",
"_____no_output_____"
]
],
[
[
"concat_samples = sample1.concatenate(\n sample2, join='outer', batch_categories=['sample1', 'sample2'], index_unique='-'\n)",
"_____no_output_____"
],
[
"concat_samples",
"_____no_output_____"
],
[
"concat_samples.var.head()",
"_____no_output_____"
],
[
"concat_samples.obs",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e73341c1d83a0669b2f1aec9513fd45e7329e2f2 | 947,869 | ipynb | Jupyter Notebook | results.ipynb | iliciuv/rd | 3e01d06c96919e1777b4e79fff531852e045aa87 | [
"BSD-3-Clause"
] | 1 | 2020-10-16T17:06:37.000Z | 2020-10-16T17:06:37.000Z | results.ipynb | iliciuv/rd | 3e01d06c96919e1777b4e79fff531852e045aa87 | [
"BSD-3-Clause"
] | null | null | null | results.ipynb | iliciuv/rd | 3e01d06c96919e1777b4e79fff531852e045aa87 | [
"BSD-3-Clause"
] | null | null | null | 3,604.064639 | 129,642 | 0.985965 | [
[
[
"###### GDP GROTRWTH R. FORECASTS WITH THIRLWALL LAW ###########\noptions(jupyter.plot_mimetypes = 'image/png')\noptions(warns=-1)\nrelpath <- \"/home/other/Desktop/ECONOMETRICS/RESULTS.112020\"\nsource(paste0(relpath, \"/MODELS/FINALx1.R\"))\ndummy_tot <- break_m\ndummy_mean <- ((26 - dummy_tot) * dum_m) / 26\ndummy_alt <- replicator_dummy(dummy_tot, 26)\nelasticity_x_alt <- replicator(elasticity_x, 26)\nelasticity_m_alt <- replicator(elasticity_m, 26)\nect_x_alt <- ect_x * residuos_x\nect_m_alt <- ect_x * residuos_x\ndum_m_alt <- replicator(dum_m, 26)\nconst_x_alt <- replicator(const_x, 26)\nconst_m_alt <- replicator(const_m, 26)\nelasticity_mreal <- data.frame(diff(LexportsDL)) / data.frame(diff(LincomeDL))\nngrowth_pool <- as.numeric(unlist(diff(LincomeDL)))\nfgrowth <- as.data.frame(diff(LfincomeDL))\nngrowth <- as.data.frame(diff(LincomeDL))\nfgrowth_mean <- growth_rate_log(LfincomeDL, 26)\nngrowth_mean <- growth_rate_log(LincomeDL, 26)\nect_m1 <- ect_m * growth_rate_prom(residuos_m) + trend_m\nect_x1 <- ect_x * growth_rate_prom(residuos_x) + trend_x\n###### THIRLWALL'S LAW FORECASTS###########\ntl_mean1 <- unname((elasticity_x * fgrowth_mean) / elasticity_m)\ntl_mean2 <- unname((elasticity_x * fgrowth_mean + ect_x1 - ect_m1) / elasticity_m)\ntl_mean3 <- unname((elasticity_x * fgrowth_mean + ect_x1 - ect_m1 - dummy_mean) / elasticity_m)\ntl_mean4 <- unname((elasticity_x * fgrowth_mean - dummy_mean) / elasticity_m)\ntl_tot_base <- unname((elasticity_x_alt * fgrowth / elasticity_m_alt))\ntl_tot_int <- unname((elasticity_x_alt * fgrowth + ect_x_alt - ect_m_alt) / elasticity_m_alt)\ntl_tot_int_dum <- unname((elasticity_x_alt * fgrowth + ect_x_alt - ect_m_alt - dum_m_alt * dummy_alt) / elasticity_m_alt)\ntl_tot_dum <- unname((elasticity_x_alt * fgrowth - dum_m_alt * dummy_alt) / elasticity_m_alt)\nfinal_rates_int <- rates_to_levels(tl_tot_int, 11)\nfinal_rates_dum <- rates_to_levels(tl_tot_dum, 11)\nfinal_rates_int_dum <- rates_to_levels(tl_tot_int_dum, 11)\nfinal_rates <- rates_to_levels(tl_tot_base, 11)\ntl_mean5 <- ave_growth_rate(final_rates, 26)\ntl_mean6 <- ave_growth_rate(final_rates_int, 26)\ntl_mean7 <- ave_growth_rate(final_rates_dum, 26)\ntl_mean8 <- ave_growth_rate(final_rates_int_dum, 26)\ntl_mean <- cbind(ngrowth_mean, tl_mean1, tl_mean2, tl_mean3, tl_mean4, tl_mean5, tl_mean6, tl_mean7, tl_mean8)\ntl_pool <- sapply(final_rates_dum, function(i) {\n diff(as.numeric(i))\n})\nelasticity_mcombie <- data.frame(diff(LexportsDL)) / tl_pool\npooled_predictions <- cbind(unlist(as.numeric(tl_pool)), unlist(as.numeric(ngrowth_pool)))\ntl_pool <- unlist(as.numeric(tl_pool))\nswales_test <- summary(lm(ngrowth_pool ~ tl_pool))$coefficients\nswales_model <- as.data.frame(rbind(swales_test, unname(summary(lm(tl_pool ~ ngrowth_pool))$coefficients)))\npar(mfrow = c(2, 2))\nwidget_scatter(y_var = ngrowth_mean, x_var = tl_mean3)\nwidget_scatter(y_var = ngrowth_pool, x_var = tl_pool)\nwidget_scatter_gg(tl_mean, x_var = tl_mean3, y_var = ngrowth_mean)\nwidget_scatter_gg(pooled_predictions, x_var = tl_pool, y_var = ngrowth_pool)\n#### COMPARACIÓN PREDICIONENS VS REALIDAD\nwidget_table(round(tl_mean, 3), 12)\ncomparison <- diff(as.ts(log(final_rates_dum))) - diff(LincomeDL)",
" x z p_x Fx Fx_t m y p_m Fm Fm_t\naut 1 1 1 7.434 0.001 1 2 0 4.102 0.052\nbel 1 1 0 4.904 0.021 1 1 0 3.719 0.080\nfin 1 1 0 6.710 0.003 1 1 0 3.343 0.118\nfra 1 1 0 5.806 0.007 1 1 0 1.028 0.564\ndeu 1 1 2 5.166 0.007 1 2 1 9.961 0.000\ngrc 1 0 0 1.594 0.614 1 1 0 1.818 0.753\nirl 1 1 0 11.379 0.000 1 2 1 1.898 0.350\nita 1 1 0 6.643 0.003 1 2 0 3.620 0.074\nnld 1 1 0 11.324 0.000 1 1 0 4.037 0.056\nprt 1 1 0 10.639 0.001 1 2 0 5.688 0.009\nesp 1 1 0 15.866 0.000 2 2 2 3.776 0.054\n A_x d(finco) diff(rpri) ect_x A_x_t d(finco)_t diff(rpri)_t ect_x_t Fx\naut 0 2.468 0.003 -0.234 0 8.837 0.013 -5.848 7.434\nbel 0 2.091 -0.110 -0.245 0 7.178 -0.937 -4.014 4.904\nfin 0 2.162 -0.026 -0.194 0 4.172 -0.084 -4.695 6.710\nfra 0 1.740 -0.117 -0.170 0 5.181 -0.819 -4.368 5.806\ndeu 0 2.478 -0.098 -0.179 0 6.805 -0.414 -4.910 5.166\ngrc 0 2.671 0.111 -0.254 0 3.627 0.628 -2.708 1.594\nirl 0 0.852 -0.491 -0.137 0 1.133 -2.041 -6.115 11.379\nita 0 2.575 0.181 -0.551 0 9.069 0.770 -4.672 6.643\nnld 0 1.835 -0.162 -0.148 0 6.596 -0.841 -6.100 11.324\nprt 0 1.839 -0.150 -0.021 0 5.228 -0.567 -4.717 10.639\nesp 0 2.007 -0.167 -0.212 0 6.314 -0.868 -7.220 15.866\n Fx_t A_m d(inco) diff(rpri) ect_m dum A_m_t d(inco)_t diff(rpri)_t\naut 0.001 0.000 2.798 -0.025 -0.553 0.000 0.000 13.473 -0.154\nbel 0.021 0.000 2.608 -0.022 -0.581 0.000 0.000 11.788 -0.228\nfin 0.003 0.000 1.850 -0.053 -0.569 0.053 0.000 14.644 -0.339\nfra 0.007 0.000 3.161 0.162 -0.006 0.000 0.000 12.983 1.915\ndeu 0.007 0.000 2.012 0.107 -0.068 0.000 0.000 9.518 0.725\ngrc 0.614 -0.038 1.919 0.227 -0.292 -0.021 -1.562 6.385 1.308\nirl 0.000 0.000 0.778 -0.201 0.018 0.000 0.000 4.906 -1.280\nita 0.003 0.000 3.078 0.084 -0.007 0.005 0.000 13.624 0.496\nnld 0.000 0.000 2.427 0.115 -0.839 0.093 0.000 10.861 0.527\nprt 0.001 0.000 2.674 -0.191 -0.194 0.043 0.000 10.090 -1.042\nesp 0.000 0.000 3.731 -0.145 -0.515 -0.043 0.000 14.003 -0.905\n ect_m_t dum_t Fm Fm_t\naut -3.688 0.000 4.102 0.052\nbel -3.496 0.000 3.719 0.080\nfin -3.321 3.274 3.343 0.118\nfra -1.466 0.000 1.028 0.564\ndeu -5.747 0.000 9.961 0.000\ngrc -2.455 -0.445 1.818 0.753\nirl 2.509 0.000 1.898 0.350\nita -2.760 0.576 3.620 0.074\nnld -3.650 3.570 4.037 0.056\nprt -4.354 4.201 5.688 0.009\nesp -4.257 -3.870 3.776 0.054\n"
],
[
"widget_table(round(coeff_results, 3), 12)\nwidget_scatter_gg(tl_mean, x_var = tl_mean3, y_var = ngrowth_mean)\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
e7335f4bc63e2eab4dd55147d5bb154f3dc9ced2 | 6,681 | ipynb | Jupyter Notebook | Untitled0.ipynb | AlanKRaju/Business-Analytics-with-python | 1ac6070ea84d20c5ee8c74c2c3c9d67ebdcf8a0f | [
"MIT"
] | null | null | null | Untitled0.ipynb | AlanKRaju/Business-Analytics-with-python | 1ac6070ea84d20c5ee8c74c2c3c9d67ebdcf8a0f | [
"MIT"
] | null | null | null | Untitled0.ipynb | AlanKRaju/Business-Analytics-with-python | 1ac6070ea84d20c5ee8c74c2c3c9d67ebdcf8a0f | [
"MIT"
] | null | null | null | 20.943574 | 58 | 0.382428 | [
[
[
"x,y,z=\"a\",\"b\",\"c\"",
"_____no_output_____"
],
[
"print(x)",
"a\n"
],
[
"print(y)",
"b\n"
],
[
"print(x,y,z)",
"a b c\n"
],
[
"import math",
"_____no_output_____"
],
[
"math.pi",
"_____no_output_____"
],
[
"math.cos(0)",
"_____no_output_____"
],
[
"math.sin(math.pi/2)",
"_____no_output_____"
],
[
"import math as s",
"_____no_output_____"
],
[
"import math as s\nx=float(input(\"En\"))\ny=float(input(\"enter the function\"))\nif y == 1:\n print(s.sin(x))\nif y == 2:\n print(s.cos(x))\nif y == 3: \n print(s.tan(x))\nif y == 4:\n print(s.log(x))",
"En2\nenter the function2\n-0.4161468365471424\n"
],
[
"def Alan():\n x=int(input(\"enter a number\"))\n print(\"sq of x=\",x*x)",
"_____no_output_____"
],
[
"Alan()",
"enter a number5\nsq of x= 25\n"
],
[
"def ayana():\n print(\"sqr of x\",x*x)",
"_____no_output_____"
],
[
"x=int(input(\"enter a number\"))\nayana()",
"enter a number5\nsqr of x 25\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e73361dbf10e24bf6afab07f1374a7e59b249578 | 67,788 | ipynb | Jupyter Notebook | notebooks/Bloom_Timing/SJDF/makePickles201905_SJDF.ipynb | SalishSeaCast/Analysis-Aline | 433e9dba5127d6de51144e91e03ddefe4fb37b91 | [
"Apache-2.0"
] | null | null | null | notebooks/Bloom_Timing/SJDF/makePickles201905_SJDF.ipynb | SalishSeaCast/Analysis-Aline | 433e9dba5127d6de51144e91e03ddefe4fb37b91 | [
"Apache-2.0"
] | null | null | null | notebooks/Bloom_Timing/SJDF/makePickles201905_SJDF.ipynb | SalishSeaCast/Analysis-Aline | 433e9dba5127d6de51144e91e03ddefe4fb37b91 | [
"Apache-2.0"
] | null | null | null | 135.035857 | 45,660 | 0.843527 | [
[
[
"# Making pickle files for bloom timing vs. environmental driver analysis for Juan de Fuca Strait (SJDF)\n### (201905 only)",
"_____no_output_____"
],
[
"To work this notebook, change values in the second code cell and rerun for each year.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nimport matplotlib as mpl\nimport netCDF4 as nc\nimport datetime as dt\nfrom salishsea_tools import evaltools as et, places, viz_tools, visualisations, bloomdrivers\nimport xarray as xr\nimport pandas as pd\nimport pickle\nimport os\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"### To recreate this notebook at a different location, only change the following cell:",
"_____no_output_____"
]
],
[
[
"# Change this to the directory you want the pickle files to be stored:\nsavedir='/ocean/aisabell/MEOPAR/extracted_files'\n# Change 'S3' to the location of interest\nloc='SJDF'\n# To create the time series for a range of years, change iyear to every year within the range\n # and run all cells each time. \niyear=2020\n\n# Leap years in 'dateslist':\nif iyear==2000 or iyear==2004 or iyear==2008 or iyear==2012 or iyear==2016 or iyear==2020: # leap years: 2020,2016,2012,2008 etc\n dateslist=[[dt.datetime(iyear,1,1),dt.datetime(iyear,1,31)],\n [dt.datetime(iyear,1,31),dt.datetime(iyear,2,29)], \n [dt.datetime(iyear,2,29),dt.datetime(iyear,4,1)]]\nelse:\n dateslist=[[dt.datetime(iyear,1,1),dt.datetime(iyear,1,31)],\n [dt.datetime(iyear,1,31),dt.datetime(iyear,2,28)], \n [dt.datetime(iyear,2,28),dt.datetime(iyear,4,1)]] \n\n# What is the start year and end year+1 of the time range of interest?\nstartyear=2007\nendyear=2021 # does NOT include this value\n\n# Note: non-location specific variables only need to be done for each year, not for each location\n# Note: getWindVars in bloomdrivers would need to be changed?",
"_____no_output_____"
],
[
"startjan=dt.datetime(iyear,1,1) # january start date\nendmar=dt.datetime(iyear,4,1) # march end date (does not include this day)\nfraserend=dt.datetime(iyear,3,31) # march end date specifically for Fraser River calculations (includes this day)\nforbloomstart=dt.datetime(iyear,2,15) # long time frame to capture spring bloom date\nforbloomend=dt.datetime(iyear,6,15)\n\nyear=str(iyear)\nmodver='201905'\n\nfname=f'JanToMarch_TimeSeries_{year}_{loc}_{modver}.pkl' # for location specific variables\nfname2=f'JanToMarch_TimeSeries_{year}_{modver}.pkl' # for non-location specific variables\nfname3=f'springBloomTime_{year}_{loc}_{modver}.pkl' # for spring bloom timing calculation\nfname4=f'JanToMarch_Mixing_{year}_{loc}_{modver}.pkl' # for location specific mixing variables\nsavepath=os.path.join(savedir,fname)\nsavepath2=os.path.join(savedir,fname2)\nsavepath3=os.path.join(savedir,fname3)\nsavepath4=os.path.join(savedir,fname4)\nrecalc=False",
"_____no_output_____"
],
[
"# lat and lon information for place:\nlon,lat=places.PLACES[loc]['lon lat']\n# get place information on SalishSeaCast grid:\nij,ii=places.PLACES[loc]['NEMO grid ji']\njw,iw=places.PLACES[loc]['GEM2.5 grid ji']\n\nfig, ax = plt.subplots(1,1,figsize = (6,6))\nwith xr.open_dataset('/data/vdo/MEOPAR/NEMO-forcing/grid/mesh_mask201702.nc') as mesh:\n ax.contour(mesh.nav_lon,mesh.nav_lat,mesh.tmask.isel(t=0,z=0),[0.1,],colors='k')\n tmask=np.array(mesh.tmask)\n gdept_1d=np.array(mesh.gdept_1d)\n e3t_0=np.array(mesh.e3t_0)\nax.plot(lon, lat, '.', markersize=14, color='red')\nax.set_ylim(48,51)\nax.set_xlim(-126,-121)\nax.set_title('Location of Station %s'%loc)\nax.set_xlabel('Longitude')\nax.set_ylabel('Latitude')\nviz_tools.set_aspect(ax,coords='map')",
"_____no_output_____"
]
],
[
[
"### Creating pickles files for location specific variables:",
"_____no_output_____"
]
],
[
[
"if recalc==True or not os.path.isfile(savepath):\n basedir='/results2/SalishSea/nowcast-green.201905/'\n nam_fmt='nowcast'\n flen=1 # files contain 1 day of data each\n ftype= 'ptrc_T' # loads bio files\n tres=24 # 1: hourly resolution; 24: daily resolution \n bio_time=list()\n diat_alld=list()\n no3_alld=list()\n flag_alld=list()\n cili_alld=list()\n microzoo_alld=list()\n mesozoo_alld=list()\n intdiat=list()\n intphyto=list()\n spar=list()\n intmesoz=list()\n intmicroz=list()\n grid_time=list()\n temp=list()\n salinity=list()\n u_wind=list()\n v_wind=list()\n twind=list()\n solar=list()\n ik=0\n for ind, datepair in enumerate(dateslist):\n start=datepair[0]\n end=datepair[1]\n flist=et.index_model_files(start,end,basedir,nam_fmt,flen,ftype,tres)\n flist3 = et.index_model_files(start,end,basedir,nam_fmt,flen,\"grid_T\",tres)\n fliste3t = et.index_model_files(start,end,basedir,nam_fmt,flen,\"carp_T\",tres)\n with xr.open_mfdataset(flist['paths']) as bio:\n bio_time.append(np.array([pd.to_datetime(ii)+dt.timedelta(minutes=30) for ii in bio.time_counter.values]))\n no3_alld.append(np.array(bio.nitrate.isel(y=ij,x=ii))) # 'all_d' = all depths\n diat_alld.append(np.array(bio.diatoms.isel(y=ij,x=ii)))\n flag_alld.append(np.array(bio.flagellates.isel(y=ij,x=ii)))\n cili_alld.append(np.array(bio.ciliates.isel(y=ij,x=ii)))\n microzoo_alld.append(np.array(bio.microzooplankton.isel(y=ij,x=ii)))\n mesozoo_alld.append(np.array(bio.mesozooplankton.isel(y=ij,x=ii)))\n\n with xr.open_mfdataset(fliste3t['paths']) as carp:\n intdiat.append(np.array(np.sum(bio.diatoms.isel(y=ij,x=ii)*carp.e3t.isel(y=ij,x=ii),1))) # 'int' = depth integrated \n intphyto.append(np.array(np.sum((bio.diatoms.isel(y=ij,x=ii)+bio.flagellates.isel(y=ij,x=ii)\\\n +bio.ciliates.isel(y=ij,x=ii))*carp.e3t.isel(y=ij,x=ii),1)))\n spar.append(np.array(carp.PAR.isel(deptht=ik,y=ij,x=ii))) # surface PAR\n intmesoz.append(np.array(np.sum(bio.mesozooplankton.isel(y=ij,x=ii)*carp.e3t.isel(y=ij,x=ii),1)))\n intmicroz.append(np.array(np.sum(bio.microzooplankton.isel(y=ij,x=ii)*carp.e3t.isel(y=ij,x=ii),1)))\n\n with xr.open_mfdataset(flist3['paths']) as grid:\n grid_time.append(np.array([pd.to_datetime(ii)+dt.timedelta(minutes=30) for ii in grid.time_counter.values]))\n temp.append(np.array(grid.votemper.isel(deptht=ik,y=ij,x=ii)) )#surface temperature\n salinity.append(np.array(grid.vosaline.isel(deptht=ik,y=ij,x=ii))) #surface salinity\n \n jW,iW,wopsdir,wnam_fmt=bloomdrivers.getWindVarsYear(iyear,loc)\n if start==dt.datetime(2007,1,1):\n start=dt.datetime(2007,1,3)\n else: \n pass\n \n flist2=et.index_model_files(start,end,wopsdir,wnam_fmt,flen=1,ftype='None',tres=24)\n with xr.open_mfdataset(flist2['paths']) as winds:\n u_wind.append(np.array(winds.u_wind.isel(y=jW,x=iW)))\n v_wind.append(np.array(winds.v_wind.isel(y=jW,x=iW)))\n twind.append(np.array(winds.time_counter))\n solar.append(np.array(winds.solar.isel(y=jW,x=iW)))\n \n bio_time=np.concatenate(bio_time,axis=0)\n diat_alld=np.concatenate(diat_alld,axis=0)\n no3_alld=np.concatenate(no3_alld,axis=0)\n flag_alld=np.concatenate(flag_alld,axis=0)\n cili_alld=np.concatenate(cili_alld,axis=0)\n microzoo_alld=np.concatenate(microzoo_alld,axis=0)\n mesozoo_alld=np.concatenate(mesozoo_alld,axis=0)\n intdiat=np.concatenate(intdiat,axis=0)\n intphyto=np.concatenate(intphyto,axis=0)\n spar=np.concatenate(spar,axis=0)\n intmesoz=np.concatenate(intmesoz,axis=0)\n intmicroz=np.concatenate(intmicroz,axis=0)\n grid_time=np.concatenate(grid_time,axis=0)\n temp=np.concatenate(temp,axis=0)\n 
salinity=np.concatenate(salinity,axis=0)\n u_wind=np.concatenate(u_wind,axis=0)\n v_wind=np.concatenate(v_wind,axis=0)\n twind=np.concatenate(twind,axis=0)\n solar=np.concatenate(solar,axis=0)\n \n # Calculations based on saved values:\n no3_30to90m=np.sum(no3_alld[:,22:26]*e3t_0[:,22:26,ij,ii],1)/np.sum(e3t_0[:,22:26,ij,ii]) # average, considering cell thickness\n sno3=no3_alld[:,0] # surface nitrate\n sdiat=diat_alld[:,0] # surface diatoms\n sflag=flag_alld[:,0] # surface flagellates\n scili=cili_alld[:,0] # surface ciliates\n intzoop=intmesoz+intmicroz # depth-integrated zooplankton\n fracdiat=intdiat/intphyto # fraction of depth-integrated phytoplankton that is diatoms\n zoop_alld=microzoo_alld+mesozoo_alld # zooplankton at all depths\n sphyto=sdiat+sflag+scili # surface phytoplankton\n phyto_alld=diat_alld+flag_alld+cili_alld # phytoplankton at all depths\n percdiat=sdiat/sphyto # fraction of surface phytoplankton that is diatoms\n\n # wind speed:\n wspeed=np.sqrt(u_wind**2 + v_wind**2)\n # wind direction in degrees from east:\n d = np.arctan2(v_wind, u_wind)\n winddirec=np.rad2deg(d + (d < 0)*2*np.pi)\n \n allvars=(bio_time,diat_alld,no3_alld,flag_alld,cili_alld,microzoo_alld,mesozoo_alld,\n intdiat,intphyto,spar,intmesoz,intmicroz,\n grid_time,temp,salinity,u_wind,v_wind,twind,solar,\n no3_30to90m,sno3,sdiat,sflag,scili,intzoop,fracdiat,zoop_alld,sphyto,phyto_alld,percdiat,\n wspeed,winddirec)\n pickle.dump(allvars,open(savepath,'wb'))\nelse:\n pvars=pickle.load(open(savepath,'rb'))\n (bio_time,diat_alld,no3_alld,flag_alld,cili_alld,microzoo_alld,mesozoo_alld,\n intdiat,intphyto,spar,intmesoz,intmicroz,\n grid_time,temp,salinity,u_wind,v_wind,twind,solar,\n no3_30to90m,sno3,sdiat,sflag,scili,intzoop,fracdiat,zoop_alld,sphyto,phyto_alld,percdiat,\n wspeed,winddirec)=pvars",
"_____no_output_____"
]
],
[
[
"### Creating pickles files for location specific mixing variables:",
"_____no_output_____"
]
],
[
[
"fname4=f'JanToMarch_Mixing_{year}_{loc}_{modver}.pkl' # for location specific mixing variables\nsavepath4=os.path.join(savedir,fname4)\nif recalc==True or not os.path.isfile(savepath4):\n basedir='/results2/SalishSea/nowcast-green.201905/'\n nam_fmt='nowcast'\n flen=1 # files contain 1 day of data each\n tres=24 # 1: hourly resolution; 24: daily resolution \n halocline=list() # daily average depth of halocline\n eddy=list() # daily average eddy diffusivity\n flist=et.index_model_files(startjan,endmar,basedir,nam_fmt,flen,\"grid_T\",tres)\n flist2=et.index_model_files(startjan,endmar,basedir,nam_fmt,flen,\"grid_W\",1)\n \n for filedate in flist['paths']:\n halocline.append(bloomdrivers.halo_de(filedate,ii,ij))\n \n for day in flist2['paths']: # this goes through each day and takes the daily average\n with xr.open_dataset(day) as gridw:\n eddy.append(np.mean(np.array(gridw.vert_eddy_diff.isel(y=ij,x=ii)),axis=0))\n depth=np.array(gridw.depthw)\n \n with xr.open_mfdataset(flist['paths']) as gridt:\n grid_time=np.array([pd.to_datetime(ii)+dt.timedelta(minutes=30) for ii in gridt.time_counter.values])\n temp=np.array(gridt.votemper.isel(y=ij,x=ii)) # all depths temperature\n salinity=np.array(gridt.vosaline.isel(y=ij,x=ii)) # all depths salinity \n \n allvars=(halocline,eddy,depth,grid_time,temp,salinity)\n pickle.dump(allvars,open(savepath4,'wb'))\nelse:\n pvars=pickle.load(open(savepath4,'rb'))\n (halocline,eddy,depth,grid_time,temp,salinity)=pvars",
"_____no_output_____"
]
],
[
[
"### Variables for bloom timing calculations",
"_____no_output_____"
]
],
[
[
"if recalc==True or not os.path.isfile(savepath3):\n basedir='/results2/SalishSea/nowcast-green.201905/'\n nam_fmt='nowcast'\n flen=1 # files contain 1 day of data each\n ftype= 'ptrc_T' # load bio files\n tres=24 # 1: hourly resolution; 24: daily resolution \n flist=et.index_model_files(forbloomstart,forbloomend,basedir,nam_fmt,flen,ftype,tres)\n flist2=et.index_model_files(forbloomstart,forbloomend,basedir,nam_fmt,flen,\"carp_T\",tres)\n\n ik=0\n with xr.open_mfdataset(flist['paths']) as bio:\n bio_time0=np.array([pd.to_datetime(ii)+dt.timedelta(minutes=30) for ii in bio.time_counter.values])\n sno30=np.array(bio.nitrate.isel(deptht=ik,y=ij,x=ii))\n sdiat0=np.array(bio.diatoms.isel(deptht=ik,y=ij,x=ii))\n sflag0=np.array(bio.flagellates.isel(deptht=ik,y=ij,x=ii))\n scili0=np.array(bio.ciliates.isel(deptht=ik,y=ij,x=ii))\n no3_alld0=np.array(bio.nitrate.isel(y=ij,x=ii)) \n diat_alld0=np.array(bio.diatoms.isel(y=ij,x=ii))\n flag_alld0=np.array(bio.flagellates.isel(y=ij,x=ii))\n cili_alld0=np.array(bio.ciliates.isel(y=ij,x=ii))\n with xr.open_mfdataset(flist2['paths']) as carp:\n intdiat0=np.array(np.sum(bio.diatoms.isel(y=ij,x=ii)*carp.e3t.isel(y=ij,x=ii),1)) # depth integrated diatom\n intphyto0=np.array(np.sum((bio.diatoms.isel(y=ij,x=ii)+bio.flagellates.isel(y=ij,x=ii)\\\n +bio.ciliates.isel(y=ij,x=ii))*carp.e3t.isel(y=ij,x=ii),1))\n fracdiat0=intdiat0/intphyto0 # depth integrated fraction of diatoms\n\n sphyto0=sdiat0+sflag0+scili0\n phyto_alld0=diat_alld0+flag_alld0+cili_alld0\n percdiat0=sdiat0/sphyto0 # percent diatoms\n\n pickle.dump((bio_time0,sno30,sdiat0,sflag0,scili0,diat_alld0,no3_alld0,flag_alld0,cili_alld0,phyto_alld0,\\\n intdiat0,intphyto0,fracdiat0,sphyto0,percdiat0),open(savepath3,'wb'))\nelse:\n bio_time0,sno30,sdiat0,sflag0,scili0,diat_alld0,no3_alld0,flag_alld0,cili_alld0,phyto_alld0,\\\n intdiat0,intphyto0,fracdiat0,sphyto0,percdiat0=pickle.load(open(savepath3,'rb'))",
"_____no_output_____"
]
],
[
[
"### Loops that are not location specific (do not need to be redone for each location):",
"_____no_output_____"
]
],
[
[
"# define sog region:\nfig, ax = plt.subplots(1,2,figsize = (6,6))\nwith xr.open_dataset('/data/vdo/MEOPAR/NEMO-forcing/grid/bathymetry_201702.nc') as bathy:\n bath=np.array(bathy.Bathymetry)\nax[0].contourf(bath,np.arange(0,250,10))\nviz_tools.set_aspect(ax[0],coords='grid')\nsogmask=np.copy(tmask[:,:,:,:])\nsogmask[:,:,740:,:]=0\nsogmask[:,:,700:,170:]=0\nsogmask[:,:,550:,250:]=0\nsogmask[:,:,:,302:]=0\nsogmask[:,:,:400,:]=0\nsogmask[:,:,:,:100]=0\n#sogmask250[bath<250]=0\nax[1].contourf(np.ma.masked_where(sogmask[0,0,:,:]==0,bathy.Bathymetry),[0,100,250,550])",
"_____no_output_____"
],
[
"k250=32 # approximate index for 250 m\nif recalc==True or not os.path.isfile(savepath2):\n\n basedir='/results2/SalishSea/nowcast-green.201905/'\n nam_fmt='nowcast'\n flen=1 # files contain 1 day of data each\n ftype= 'ptrc_T' # load bio files\n tres=24 # 1: hourly resolution; 24: daily resolution \n flist=et.index_model_files(startjan,endmar,basedir,nam_fmt,flen,ftype,tres)\n flist3 = et.index_model_files(startjan,endmar,basedir,nam_fmt,flen,\"grid_T\",tres)\n fliste3t = et.index_model_files(startjan,endmar,basedir,nam_fmt,flen,\"carp_T\",tres)\n\n ik=0\n with xr.open_mfdataset(flist['paths']) as bio:\n no3_past250m=np.array(np.sum(np.sum(np.sum(bio.nitrate.isel(deptht=slice(32,40))*sogmask[:,32:,:,:]*e3t_0[:,32:,:,:],3),2),1)\\\n /np.sum(sogmask[0,32:,:,:]*e3t_0[0,32:,:,:]))\n \n if iyear !=2020: \n # reading Fraser river flow files\n dfFra=pd.read_csv('/ocean/eolson/MEOPAR/obs/ECRivers/Flow/FraserHopeDaily__Feb-8-2021_06_29_29AM.csv',\n skiprows=1)\n # the original file contains both flow and water level information in the same field (Value)\n # keep only the flow data, where PARAM=1 (drop PARAM=2 values, water level data)\n # flow units are m3/s\n # DD is YD, year day (ie. 1 is jan 1)\n dfFra.drop(dfFra.loc[dfFra.PARAM==2].index,inplace=True) \n\n # rename 'Value' column to 'Flow' now that we have removed all the water level rows\n dfFra.rename(columns={'Value':'Flow'}, inplace=True) \n # inplace=True does this function on the orginal dataframe\n\n # no time information so use dt.date\n dfFra['Date']=[dt.date(iyr,1,1)+dt.timedelta(days=idd-1) for iyr, idd in zip(dfFra['YEAR'],dfFra['DD'])]\n # taking the value from the yr column, jan1st date, and making jan1 column to be 1 not 0\n dfFra.head(2)\n\n # select portion of dataframe in desired date range\n dfFra2=dfFra.loc[(dfFra.Date>=startjan.date())&(dfFra.Date<=fraserend.date())]\n riv_time=dfFra2['Date'].values\n rivFlow=dfFra2['Flow'].values\n # could also write dfFra['Date'], sometimes this is required\n # newstart is a datetime object, so we convert it to just a date with .date\n else: \n dfFra=pd.read_csv('/data/dlatorne/SOG-projects/SOG-forcing/ECget/Fraser_flow',sep='\\s+',\n comment='#',names=('Year','Month','Day','Flow'))\n dfFra['Date']=[dt.datetime(int(y),int(m),int(d)) for ind,(y,m,d,f) in dfFra.iterrows()]\n dfFra2=dfFra.loc[(dfFra.Date>=startjan)&(dfFra.Date<=fraserend)]\n riv_time=dfFra2['Date'].values\n rivFlow=dfFra2['Flow'].values\n \n allvars=(no3_past250m,riv_time,rivFlow)\n pickle.dump(allvars,open(savepath2,'wb'))\nelse:\n pvars=pickle.load(open(savepath2,'rb'))\n (no3_past250m,riv_time,rivFlow)=pvars",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"raw"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"raw",
"raw"
]
] |
e7336c4d6b1d6f98cf1f28dff8f790fa7dd48aa7 | 569,056 | ipynb | Jupyter Notebook | mri_classification/04_heatmap_raw_images.ipynb | CalmScout/MoLAB | 91b5b85e9a1d0f46a7c577937cd699d7c3e271bd | [
"MIT"
] | null | null | null | mri_classification/04_heatmap_raw_images.ipynb | CalmScout/MoLAB | 91b5b85e9a1d0f46a7c577937cd699d7c3e271bd | [
"MIT"
] | null | null | null | mri_classification/04_heatmap_raw_images.ipynb | CalmScout/MoLAB | 91b5b85e9a1d0f46a7c577937cd699d7c3e271bd | [
"MIT"
] | null | null | null | 1,533.843666 | 224,240 | 0.961993 | [
[
[
"## Heatmap for whole slices",
"_____no_output_____"
]
],
[
[
"%reload_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nfrom fastai.vision import *",
"_____no_output_____"
],
[
"bs = 512",
"_____no_output_____"
],
[
"path = Path(\"/storage_1/ds_gbm_vs_met_threshold_whole_50/\")",
"_____no_output_____"
],
[
"tfms = get_transforms(flip_vert=True, do_flip=True, p_affine=0., p_lighting=0., max_zoom=1.)",
"_____no_output_____"
],
[
"src = ImageList.from_folder(path).split_by_folder()",
"_____no_output_____"
],
[
"def get_data(size, bs, padding_mode='reflection'):\n return (src.label_from_folder()\n .transform(tfms, size=size, padding_mode=padding_mode)\n .databunch(bs=bs).normalize(imagenet_stats))",
"_____no_output_____"
],
[
"data = get_data(224, bs, 'zeros')",
"_____no_output_____"
],
[
"def _plot(i,j,ax):\n x,y = data.train_ds[3]\n x.show(ax, y=y)\n\nplot_multi(_plot, 3, 3, figsize=(8,8))",
"_____no_output_____"
],
[
"data = get_data(224, bs)",
"_____no_output_____"
],
[
"plot_multi(_plot, 3, 3, figsize=(8,8))",
"_____no_output_____"
],
[
"# best model so far, with switched off transforms\nlearn = cnn_learner(data, models.resnet50, metrics=error_rate).load('stage-2-off-tfms-resnet50-ep8')",
"_____no_output_____"
],
[
"idx=1\nx,y = data.valid_ds[idx]\nx.show()\ndata.valid_ds.y[idx]",
"_____no_output_____"
],
[
"# len(data.valid_ds)",
"_____no_output_____"
],
[
"# idx_last = -1\n# x_last,y_last = data.valid_ds[idx_last]\n# x_last.show()\n# data.valid_ds.y[idx_last]",
"_____no_output_____"
]
],
[
[
"## Heatmap",
"_____no_output_____"
]
],
[
[
"m = learn.model.eval();",
"_____no_output_____"
],
[
"xb,_ = data.one_item(x)\nxb_im = Image(data.denorm(xb)[0])\nxb = xb.cuda()",
"_____no_output_____"
],
[
"from fastai.callbacks.hooks import *",
"_____no_output_____"
],
[
"def hooked_backward(cat=y):\n with hook_output(m[0]) as hook_a: \n with hook_output(m[0], grad=True) as hook_g:\n preds = m(xb)\n preds[0,int(cat)].backward()\n return hook_a,hook_g",
"_____no_output_____"
],
[
"hook_a,hook_g = hooked_backward()",
"_____no_output_____"
],
[
"acts = hook_a.stored[0].cpu()\nacts.shape",
"_____no_output_____"
],
[
"avg_acts = acts.mean(0)\navg_acts.shape",
"_____no_output_____"
],
[
"def show_heatmap(hm):\n _,ax = plt.subplots()\n xb_im.show(ax)\n ax.imshow(hm, alpha=0.6, extent=(0,224,224,0),\n interpolation='bilinear', cmap='magma');",
"_____no_output_____"
],
[
"show_heatmap(avg_acts)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7336e99708bff513f733282b495414b7203a526 | 181,296 | ipynb | Jupyter Notebook | [Py4DP] [Lecture-2] Graded Assignment.ipynb | gald1017/python-for-DS | 6f25cc38b74b0fb224ad3f7845ac19d3704a265a | [
"MIT"
] | null | null | null | [Py4DP] [Lecture-2] Graded Assignment.ipynb | gald1017/python-for-DS | 6f25cc38b74b0fb224ad3f7845ac19d3704a265a | [
"MIT"
] | null | null | null | [Py4DP] [Lecture-2] Graded Assignment.ipynb | gald1017/python-for-DS | 6f25cc38b74b0fb224ad3f7845ac19d3704a265a | [
"MIT"
] | null | null | null | 202.113712 | 47,372 | 0.907356 | [
[
[
"# Notes\n\nDifferent problems give different number of points: 2, 3 or 4.\n\nPlease, fill `STUDENT` variable with your name, so that we call collect the results automatically. Each problem contains specific validation details. We will do our best to review your assignments, but please keep in mind, that for this assignment automatic grade (between $0$ an $1$) is the primary source of ground truth.",
"_____no_output_____"
]
],
[
[
"%pylab inline\nplt.style.use(\"bmh\")",
"Populating the interactive namespace from numpy and matplotlib\n"
],
[
"plt.rcParams[\"figure.figsize\"] = (6,6)",
"_____no_output_____"
],
[
"import numpy as np\nimport torch",
"_____no_output_____"
],
[
"STUDENT = \"Gal Dahan Evyatar Shpitzer\"\nASSIGNMENT = 2\nTEST = False",
"_____no_output_____"
],
[
"if TEST:\n import solutions\n total_grade = 0\n MAX_POINTS = 19",
"_____no_output_____"
]
],
[
[
"# NumPy broadcasting",
"_____no_output_____"
],
[
"### 1. Normalize matrix rows (2 points).\n\nFor 2-dimensional array `arr`, calculate an array, in which each row is a normalized version of corresponding row from `arr`.\n\nFor example, for `(3,4)` input array, the output is also `(3,4)` and `out_arr[0] = (arr[0] - np.mean(arr[0])) / np.std(arr[0])` and so on for other rows.\n\nResult must be **2-dimensional**, and **will be tested against three random combinations of input array dimensions ($10 \\leq n < 100 $)**. Array values will be drawn from a normal distribution (`np.random.normal`) with random mean and standard deviation.",
"_____no_output_____"
]
],
[
[
"def norm_rows(arr):\n # your code goes here\n return (arr - np.expand_dims(arr.mean(axis=1), axis=1))/(np.expand_dims(arr.std(axis=1),axis=1))",
"_____no_output_____"
],
[
"PROBLEM_ID = 1\n\nif TEST:\n total_grade += solutions.check(STUDENT, PROBLEM_ID, norm_rows)",
"Problem 1: Correct\n"
]
],
[
[
"### 2. Normalize matrix columns (2 points).\n\nSimilar to Problem 1, but normalization must be performed along columns.\n\nFor example, for `(3,4)` input array, the output is also `(3,4)` and `out_arr[:, 0] = (arr[:, 0] - np.mean(arr[:, 0])) / np.std(arr[:, 0])` and so on for other columns.\n\nResult must be **2-dimensional**, and **will be tested against three random combinations of input array dimensions ($10 \\leq n < 100 $)**. Array values will be drawn from normal distribution (`np.random.normal`) with random mean and standard deviation.",
"_____no_output_____"
]
],
[
[
"def norm_cols(arr):\n # your code goes here\n return (arr - arr.mean(axis=0).T)/(arr.std(axis=0).T)",
"_____no_output_____"
],
[
"PROBLEM_ID = 2\n\nif TEST:\n total_grade += solutions.check(STUDENT, PROBLEM_ID, norm_cols)",
"Problem 2: Correct\n"
]
],
[
[
"### 3. Generic normalize routine (2 points).\n\nSimilar to Problems 1 and 2, but normalization must be performed according to `axis` argument. `axis=0` means normalization along the columns, and `axis=1` means normalization along the rows.",
"_____no_output_____"
]
],
[
[
"def norm_cols(arr):\n # your code goes here\n return (arr - arr.mean(axis=0).T)/(arr.std(axis=0).T)\n\ndef norm_rows(arr):\n # your code goes here\n return (arr - np.expand_dims(arr.mean(axis=1), axis=1))/(np.expand_dims(arr.std(axis=1),axis=1))\n\ndef norm(arr, axis):\n # your code goes here\n if axis == 0:\n return norm_cols(arr)\n else:\n return norm_rows(arr)\n",
"_____no_output_____"
],
[
"PROBLEM_ID = 3\n\nif TEST:\n total_grade += solutions.check(STUDENT, PROBLEM_ID, norm)",
"Problem 3: Correct\n"
]
],
[
[
"### 4. Dot product of matrix and vector (2 points).\n\nCalculate dot product of 2-dimensional array $M$ of shape $(N,K)$ and 1-dimensional row vector $v$ of shape $(K,)$. You cannot use `np.dot` in this exercise.\n\nResult must be **1-dimensional** of shape $(N,)$, and **will be tested against three random combinations of input arrays dimensions ($10 \\leq n < 100 $)**. Arrays values will be drawn from standard normal distribution (`np.random.randn`).",
"_____no_output_____"
]
],
[
[
"def dot(m, v):\n # your code goes here\n return (m*v).sum(axis=1)",
"_____no_output_____"
],
[
"PROBLEM_ID = 4\n\nif TEST:\n total_grade += solutions.check(STUDENT, PROBLEM_ID, dot)",
"Problem 4: Correct\n"
]
],
[
[
"### 5. Calculate recurrence matrix (3 points).\n\nIn signals (or time series) analysis, it's usualy important to quickly assess the structure (if any) of the data. This can be done in many different ways. You can test, whether a signal is stationary or look at Fourier transform to understand the frequency composition of a signal. When you want to understand, whether signal contains some recurring pattern, it's useful to perform what is called *recurrent quantification analysis*.\n\nImagine a signal $s_i$. Recurrence matrix is then:\n\n$$\nR_{ij} = \\left\\{\n\\begin{array}{l}\n1, |s_i-s_j|<\\varepsilon \\\\\n0, |s_i-s_j|\\ge\\varepsilon \\\\\n\\end{array}\n\\right.\n$$\n\nIn this exercise you need to implement a function, which calculates recurrence matrix for 1-dimensional array. The function should not use any loops and must leverage broadcasting. For reference, naive loop implementation is provided below. Plot recurrence matrices for some signals to understand, how signal structure reveals itself in the recurrence matrix.\n\nFor example, for a signal of shape $(100,)$ result must be of the shape $(100, 100)$. Result must be **2-dimensional**, and **will be tested against three random combinations of input array dimensions ($100 \\leq n < 1000 $)** with different signal patterns (noise, $\\sin$, noise + randomly-placed recurrent pattern).",
"_____no_output_____"
]
],
[
[
"def recm_naive(ts, eps):\n \"\"\"Loop implementation of recurrent matrix.\"\"\"\n\n ln = len(ts)\n\n rm = np.zeros((ln, ln), dtype=bool)\n \n for i in range(ln):\n for j in range(ln):\n rm[i, j] = np.abs(ts[i]-ts[j])<eps\n return rm",
"_____no_output_____"
],
[
"random_signal = np.random.randn(200)\nplt.imshow(recm_naive(random_signal, 1e-1), cmap=plt.cm.binary)\n",
"_____no_output_____"
],
[
"sin_signal = np.sin(np.arange(1000))\nplt.imshow(recm_naive(sin_signal, 1e-1), cmap=plt.cm.binary)",
"_____no_output_____"
],
[
"random_signal = np.random.randn(200)\nrandom_signal[6:21] = 5 * np.ones((15,))\nrandom_signal[93:108] = 5 * np.ones((15,))\n\nrandom_signal[39:54] = 0.5 * np.ones((15,))\nrandom_signal[162:177] = 0.5 * np.ones((15,))\n\nplt.plot(random_signal)\nplt.show()\n\nplt.imshow(recm_naive(random_signal, 5e-1), cmap=plt.cm.binary);",
"_____no_output_____"
],
[
"def recm(ts, eps):\n rm = np.ones((len(ts), len(ts)), dtype=bool)*ts\n return np.abs(rm - ts.reshape(len(rm) ,1) ) < eps\n \n ",
"_____no_output_____"
],
[
"PROBLEM_ID = 5\n\nif TEST:\n total_grade += solutions.check(STUDENT, PROBLEM_ID, recm)",
"Problem 5: Correct\n"
]
],
[
[
"# PyTorch",
"_____no_output_____"
],
[
"### 6. ReLU activation (2 points).\n\nReLU is the most commonly used activation function in many deep learning application. It's defined as\n\n$$\nReLU(x) = \\max(0, x).\n$$\n\nOutpu must be of the same shape as input, and **will be tested against three random combinations of input array dimensions ($100 \\leq n < 1000 $)**, while values of the input are drawn from standard normal distribution. Number of dimensions of the input will also be selected randomly and is either 1, 2 or 3.",
"_____no_output_____"
]
],
[
[
"def relu(arr):\n arr[arr<0] = 0\n return arr",
"_____no_output_____"
],
[
"PROBLEM_ID = 6\n\nif TEST:\n total_grade += solutions.check(STUDENT, PROBLEM_ID, relu)",
"Problem 6: Correct\n"
]
],
[
[
"### 7. Mean squared error (2 points).\n\nIn this problem you need to calculate MSE for a pair of tensors `y_true` and `y_pred`. MSE is defined as usual:\n\n$$\nL_{MSE} = \\frac{1}{N} \\sum_i \\left(y_i - \\hat y_i\\right)^2\n$$\n\nNote, however, that `y_true` and `y_pred`may be of **different shape**. While `y_true` is always $(N,)$, `y_pred` may be $(N,1)$, $(1, N)$ or $(N,)$. Input values are drawn from standard normal distribution and **shape is selected randomly ($100 \\leq n < 1000 $)**.",
"_____no_output_____"
]
],
[
[
"def mse(y_true, y_pred):\n # your code goes here\n return (torch.sum((y_true - y_pred.reshape(y_true.shape))**2)) / y_true.shape[0]",
"_____no_output_____"
],
[
"PROBLEM_ID = 7\n\nif TEST:\n total_grade += solutions.check(STUDENT, PROBLEM_ID, mse)",
"_____no_output_____"
]
],
[
[
"### 8. Character-level encoding (4 points).\n\nIn computations in general and in machine learning specifically letters cannot be used directly, as computers only know aboun numbers. Text data may be encoded in many different ways in natural language processing tasks.\n\nOne of the simplest ways to encode letters is to use one-hot encoded representation, with letters being \"class labels\". A letter is represented by a tensor of shape $(26,)$.\n\nThen, for example, word \"python\" would be transformed into a tensor of shape $(6, 26)$ with all elements being $0$, except $(0, 15)\\sim p,\\,(1, 24)\\sim y,\\,(2, 19)\\sim t,...$ being $1$. A phrase would be represented with 3-dimensional tensor.\n\nIn this problem you need to create a tensor, which represents a list of words `words` of length $N$. The only characters used are those from `string.ascii_lowercase`, and words are of different length $L_i$. Output must be of shape $(N, \\max(L_i), 26)$.\n\nDimension 0 corresponds to words themselves, with `tensor[0]` being a represetation of `words[0]`. Note, that you need to use padding: although trivial in this case, you must remember, that tensor must accomodate for a longest word, thus dimension 1 is $\\max(L_i)$.\n\nNote also, that the only loop you need here is a loop over `words`, there's no need to loop over the resulting tensor.\n\nThe result will be tested against three predefined lists of word, with all words being lowercase and containing only ASCII characters.",
"_____no_output_____"
]
],
[
[
"def word_2_int(w):\n alphabet = 'abcdefghijklmnopqrstuvwxyz'\n char_to_int = dict((c, i) for i, c in enumerate(alphabet))\n \n return torch.as_tensor([char_to_int[char] for char in w])\n\ndef onehot(labels, tensor_size):\n # your code goes here\n b = torch.zeros(( tensor_size, 26 ))\n b[torch.arange(len(labels)), labels] = 1\n return b\n\n \n\ndef encode(words):\n# your code goes here\n max_length = max([len(e) for e in words])\n m = torch.zeros(len(words), max_length, 26)\n for i,w in zip(range(len(words)), words):\n m[i,:,:] = onehot(word_2_int(w), max_length)\n\n return m\n ",
"_____no_output_____"
],
[
"PROBLEM_ID = 8\n\nif TEST:\n total_grade += solutions.check(STUDENT, PROBLEM_ID, encode)",
"Problem 8: Correct\n"
]
],
[
[
"# Your grade",
"_____no_output_____"
]
],
[
[
"if TEST:\n print(f\"{STUDENT}: {int(100 * total_grade / MAX_POINTS)}\")",
"Gal Dahan Evyatar Shpitzer: 0\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7337929f1c8c1bbb7d7763ba4722815d7809bff | 146,862 | ipynb | Jupyter Notebook | Model-based-CEM-policy.ipynb | mgb45/OC-notebooks | 67b1899d1fb3455ab3caab58f94429b9f432164b | [
"MIT"
] | 1 | 2021-05-03T14:47:27.000Z | 2021-05-03T14:47:27.000Z | Model-based-CEM-policy.ipynb | mgb45/OC-notebooks | 67b1899d1fb3455ab3caab58f94429b9f432164b | [
"MIT"
] | null | null | null | Model-based-CEM-policy.ipynb | mgb45/OC-notebooks | 67b1899d1fb3455ab3caab58f94429b9f432164b | [
"MIT"
] | null | null | null | 362.622222 | 41,288 | 0.930642 | [
[
[
"import numpy as np\nfrom matplotlib import pyplot as plt\nfrom IPython import display\n\nimport torch\nfrom torch.utils.data import Dataset, DataLoader\n\nfrom data import H5Dataset\nfrom models import FCN, Encoder\nfrom torch_pendulum import Pendulum",
"_____no_output_____"
]
],
[
[
"Given running cost $g(x_t,u_t)$ and terminal cost $h(x_T)$ the finite horizon $(t=0 \\ldots T)$ optimal control problem seeks to find the optimal control, \n$$u^*_{1:T} = \\text{argmin}_{u_{1:T}} L(x_{1:T},u_{1:T})$$ \n$$u^*_{1:T} = \\text{argmin}_{u_{1:T}} h(x_T) + \\sum_{t=0}^T g(x_t,u_t)$$\nsubject to the dynamics constraint: $x_{t+1} = f(x_t,u_t)$.\n\nThis notebook provides a dirty, brute forcing solution to problems of this form, using the inverted pendulum as an example, and assuming dynamics are not know a-priori. First, we gather state, actions, next state pairs, and use these to train a surrogate neural network dynamics model, $x_{t+1} \\sim \\hat{f}(x_t,u_t)$, approximating the true dynamics $f$.\n\nWe'll then set up a sampling-based optimiser (CEM) to train a policy $u^*_t \\sim p(x_t)$ by rolling out using the surrogate dynamics $\\hat{f}$, evaluating the cost. We'll do this in a continuous control setting, but again no stability guarantees. Miguel has a great description of CEM: https://jaques.xyz/cem-and-posterior/",
"_____no_output_____"
]
],
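[
[
"To make the CEM idea concrete before wiring it into the learned dynamics, here is a minimal, self-contained sketch (a toy example, not the controller used below) that minimises a 1-D quadratic cost with the same sample-sort-refit loop:\n\n```python\nimport numpy as np\n\ndef cem_minimize(cost, mu=0.0, sigma=2.0, n_samples=64, n_elite=8, iters=20):\n    # sample candidates, keep the lowest-cost 'elite' fraction,\n    # and refit the sampling distribution to those elites\n    for _ in range(iters):\n        samples = np.random.randn(n_samples) * sigma + mu\n        elites = samples[np.argsort(cost(samples))[:n_elite]]\n        mu, sigma = elites.mean(), elites.std() + 1e-6\n    return mu\n\nprint(cem_minimize(lambda u: (u - 1.5) ** 2))  # converges near 1.5\n```\n\nThe policy-learning code below uses the same principle, except that the candidates are actions rolled out through the surrogate dynamics $\\hat{f}$ and the elite actions are used to update the policy network.",
"_____no_output_____"
]
],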
[
[
"# NN parameters\nNsamples = 10000\nepochs = 500\n\nlatent_dim = 1024\nbatch_size = 8\nlr = 3e-4\n\n# Torch environment wrapping gym pendulum\ntorch_env = Pendulum()\n\n# Test parameters\nNsteps = 100",
"_____no_output_____"
],
[
"# Set up model (fully connected neural network)\n\nmodel = FCN(latent_dim=latent_dim,d=torch_env.d,ud=torch_env.ud)\noptimizer = torch.optim.Adam(model.parameters(), lr=lr)",
"_____no_output_____"
],
[
"# Load previously trained model\nmodel.load_state_dict(torch.load('./fcn.npy'))",
"_____no_output_____"
],
[
"# Or gather some training data\nstates_, actions, states = torch_env.get_data(Nsamples)\n\ndset = H5Dataset(np.array(states_),np.array(actions),np.array(states))\nsampler = DataLoader(dset, batch_size=batch_size, shuffle=True)",
"_____no_output_____"
],
[
"# and train model\n\nlosses = []\nfor epoch in range(epochs):\n \n batch_losses = []\n for states_,actions,states in sampler:\n \n recon_x = model(states_,actions)\n loss = model.loss_fn(recon_x,states)\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n \n batch_losses.append(loss.item())\n \n losses.append(np.mean(batch_losses))\n plt.cla()\n plt.semilogy(losses)\n \n display.clear_output(wait=True)\n display.display(plt.gcf())",
"_____no_output_____"
],
[
"torch.save(model.state_dict(),'./fcn.npy')",
"_____no_output_____"
],
[
"# Test model rollouts - looks reasonable\n\nstates = []\n_states = []\n\ns = torch_env.env.reset()\nstates.append(s)\n_states.append(s.copy())\nfor i in range(30):\n a = torch_env.env.action_space.sample()\n s,r,_,_ = torch_env.env.step(a) # take a random action\n states.append(s)\n \n # roll-out with model\n _s = model(torch.from_numpy(_states[-1]).float().reshape(1,-1),torch.from_numpy(a).float().reshape(1,-1))\n _states.append(_s.detach().numpy())\n \n plt.cla()\n plt.plot(np.array(states),'--')\n plt.plot(np.vstack(_states))\n \n display.clear_output(wait=True)\n display.display(plt.gcf())",
"_____no_output_____"
],
[
"# Set up policy\npolicy = Encoder(latent_dim=latent_dim,hidden_dim=torch_env.ud)\n\n# Load previously trained model\n# policy.load_state_dict(torch.load('./CEM_policy.npy'))",
"_____no_output_____"
],
[
"class CEMControl:\n \n def __init__(self,dynamics,running_cost,term_cost,policy,u_dim=1,umax=2,frac=5,horizon=10,lr=1e-4):\n \n self.dynamics = dynamics\n self.term_cost = term_cost\n self.running_cost = running_cost\n self.horizon = horizon\n self.u_dim = u_dim\n self.umax = umax\n self.lr = lr\n self.frac = frac\n self.policy = policy\n self.optimizer = torch.optim.Adam(self.policy.parameters(), lr=self.lr)\n \n\n def cost(self,x):\n cost = []\n states = [x]\n controls = []\n with torch.no_grad():\n for j in range(self.horizon-1):\n mu_u = self.policy(states[-1])\n u = mu_u + torch.randn(1,1)\n states.append(self.dynamics(states[-1].reshape(1,-1),self.umax*torch.tanh(u).reshape(1,-1)))\n cost.append(self.running_cost(states[-1].reshape(1,-1),self.umax*torch.tanh(u).reshape(1,-1)))\n controls.append(u)\n\n return torch.sum(torch.stack(cost))+self.term_cost(states[-1],u[:,-1]), controls\n \n def minimize(self,xin,Ns=10):\n \n # Sample a bunch of trials using dynamics and policy\n losses = []\n controls = []\n for i in range(Ns):\n loss, u = self.cost(xin)\n \n losses.append(loss)\n controls.append(u[0])\n \n # Update policy\n \n mu_u = self.policy(xin) # Current action\n \n \n # Weight actions\n weights = -torch.stack(losses).reshape(-1,)\n weights = weights-torch.min(weights)\n weights = weights/torch.sum(weights)\n \n # Get top self.frac% actions and compute new best action\n idxs = torch.argsort(weights,dim=-1,descending=True)\n \n mu_new = torch.mean(torch.stack([controls[idx] for idx in idxs[:int(Ns/self.frac)]]).reshape(-1,self.u_dim))\n \n # Step in the direction of this better action\n loss = (mu_u-mu_new)**2\n \n self.optimizer.zero_grad()\n loss.backward()\n self.optimizer.step()\n \n return self.umax*torch.tanh(mu_new), loss.item()\n",
"_____no_output_____"
],
[
"policy_learner = CEMControl(model.dynamics, torch_env.running_cost, torch_env.term_cost, policy,u_dim=1,umax=2,horizon=10,lr=1e-4)",
"_____no_output_____"
],
[
"# Train controller\nplt.figure(figsize=(15,5))\ns = torch_env.env.reset()\nlosses = []\nfor i in range(10000):\n \n s = torch_env.env.observation_space.sample()\n u,loss = policy_learner.minimize(torch.from_numpy(s).reshape(1,-1).float(),Ns=50) #OC\n \n s,r,_,_ = torch_env.env.step([u.detach().numpy()]) # take a random action\n losses.append(loss)\n \n # Test policy every 100 steps\n if (i%100==0):\n s = torch_env.env.reset()\n \n for k in range(150):\n \n mu_u = policy(torch.from_numpy(s).reshape(1,-1).float())\n s,r,_,_ = torch_env.env.step(2*torch.tanh(mu_u).detach().numpy()) # take a random action\n torch_env.env.render()\n \n plt.clf()\n plt.semilogy(losses)\n \n display.clear_output(wait=True)\n display.display(plt.gcf())",
"_____no_output_____"
],
[
"torch.save(policy.state_dict(),'./CEM_policy.npy')",
"_____no_output_____"
],
[
"# Test policy\ns = torch_env.env.reset()\nfor k in range(200):\n\n mu_u = policy(torch.from_numpy(s).reshape(1,-1).float())\n s,r,_,_ = torch_env.env.step(2*torch.tanh(mu_u).detach().numpy()) # take a random action\n# print(mu_u)\n torch_env.env.render()",
"_____no_output_____"
],
[
"torch_env.env.close()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7338678156bf2f034d1708b04f1a9d06a1bd013 | 97,055 | ipynb | Jupyter Notebook | notes/01-intro.ipynb | TolaAbiodun/2020-Pandas-tutorial_notes | dcea061fd0117fb8b56fb2470ecdef9d0b40f734 | [
"MIT"
] | 1 | 2021-11-20T08:48:10.000Z | 2021-11-20T08:48:10.000Z | notes/01-intro.ipynb | TolaAbiodun/2020-Pandas-tutorial_notes | dcea061fd0117fb8b56fb2470ecdef9d0b40f734 | [
"MIT"
] | null | null | null | notes/01-intro.ipynb | TolaAbiodun/2020-Pandas-tutorial_notes | dcea061fd0117fb8b56fb2470ecdef9d0b40f734 | [
"MIT"
] | 3 | 2021-05-05T18:01:55.000Z | 2021-11-07T09:24:57.000Z | 32.777778 | 1,995 | 0.40409 | [
[
[
"import pandas",
"_____no_output_____"
],
[
"pandas.__version__",
"_____no_output_____"
],
[
"pandas.read_csv('.../data/gapminder.tsv', sep='\\t')",
"_____no_output_____"
],
[
"df = pandas.read_csv('../data/gapminder.tsv', sep='\\t')",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"import pandas as pd",
"_____no_output_____"
],
[
"# from pandas import * ## Do not do this",
"_____no_output_____"
],
[
"df = pd.read_csv('../data/gapminder.tsv', sep='\\t')",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"type(df)",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1704 entries, 0 to 1703\nData columns (total 6 columns):\ncountry 1704 non-null object\ncontinent 1704 non-null object\nyear 1704 non-null int64\nlifeExp 1704 non-null float64\npop 1704 non-null int64\ngdpPercap 1704 non-null float64\ndtypes: float64(2), int64(2), object(2)\nmemory usage: 80.0+ KB\n"
],
[
"df.shape()",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.tail()",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"df.index",
"_____no_output_____"
],
[
"df.values",
"_____no_output_____"
],
[
"df.dtypes",
"_____no_output_____"
],
[
"country = df['country']",
"_____no_output_____"
],
[
"type(country)",
"_____no_output_____"
],
[
"country = df[['country']]",
"_____no_output_____"
],
[
"type(country)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"del df['country']",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df = pd.read_csv('../data/gapminder.tsv', sep='\\t')",
"_____no_output_____"
],
[
"df = df.drop(['continent', 'country'], axis='columns')",
"_____no_output_____"
],
[
"df = pd.read_csv('../data/gapminder.tsv', sep='\\t')",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.loc[0]",
"_____no_output_____"
],
[
"df.loc[-1]",
"_____no_output_____"
],
[
"df.loc[[0, 1]]",
"_____no_output_____"
],
[
"df.iloc[0]",
"_____no_output_____"
],
[
"df.iloc[-1]",
"_____no_output_____"
],
[
"subset = df.loc[:, ['year', 'pop']]",
"_____no_output_____"
],
[
"subset.head()",
"_____no_output_____"
],
[
"subset = df.iloc[:, [2, 4]]",
"_____no_output_____"
],
[
"subset.head()",
"_____no_output_____"
],
[
"df.loc[df['country'] == 'United States']",
"_____no_output_____"
],
[
"df['country'] == 'United States'",
"_____no_output_____"
],
[
"df.loc[(df['country'] == 'United States') & (df['year'] == 1982)]",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.groupby('year')['lifeExp'].mean()",
"_____no_output_____"
],
[
"import numpy as np\ndf.groupby('year')['lifeExp'].agg(np.mean)",
"_____no_output_____"
],
[
"df.groupby(['year', 'continent'])[['lifeExp', 'gdpPercap']].agg(np.mean).reset_index()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7338a89b206d8f40bee44920605922c7fff9179 | 63,891 | ipynb | Jupyter Notebook | nbs/060_callback.core.ipynb | Attol8/timeseriesAI | 50f8767e26eaee36f444388ea083866c17dbce19 | [
"Apache-2.0"
] | null | null | null | nbs/060_callback.core.ipynb | Attol8/timeseriesAI | 50f8767e26eaee36f444388ea083866c17dbce19 | [
"Apache-2.0"
] | null | null | null | nbs/060_callback.core.ipynb | Attol8/timeseriesAI | 50f8767e26eaee36f444388ea083866c17dbce19 | [
"Apache-2.0"
] | null | null | null | 71.707071 | 14,992 | 0.73807 | [
[
[
"# default_exp callback.core",
"_____no_output_____"
]
],
[
[
"# Callback\n\n> Miscellaneous callbacks for timeseriesAI.",
"_____no_output_____"
]
],
[
[
"#export \nfrom tsai.imports import *\nfrom tsai.utils import *\nfrom tsai.data.preprocessing import *\nfrom tsai.data.transforms import *\nfrom tsai.models.layers import *\nfrom fastai.callback.all import *",
"_____no_output_____"
],
[
"#export\nimport torch.multiprocessing\ntorch.multiprocessing.set_sharing_strategy('file_system')",
"_____no_output_____"
]
],
[
[
"## Events",
"_____no_output_____"
],
[
"A callback can implement actions on the following events:\n* before_fit: called before doing anything, ideal for initial setup.\n* before_epoch: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.\n* before_train: called at the beginning of the training part of an epoch.\n* before_batch: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).\n* after_pred: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.\n* after_loss: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).\n* before_backward: called after the loss has been computed, but only in training mode (i.e. when the backward pass will be used)\n* after_backward: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).\n* after_step: called after the step and before the gradients are zeroed.\n* after_batch: called at the end of a batch, for any clean-up before the next one.\n* after_train: called at the end of the training phase of an epoch.\n* before_validate: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.\n* after_validate: called at the end of the validation part of an epoch.\n* after_epoch: called at the end of an epoch, for any clean-up before the next one.\n* after_fit: called at the end of training, for final clean-up.",
"_____no_output_____"
],
[
"## Learner attributes",
"_____no_output_____"
],
[
"When writing a callback, the following attributes of Learner are available:\n\n* **model**: the model used for training/validation\n* **data**: the underlying DataLoaders\n* **loss_func**: the loss function used\n* **opt**: the optimizer used to udpate the model parameters\n* **opt_func**: the function used to create the optimizer\n* **cbs**: the list containing all Callbacks\n* **dl**: current DataLoader used for iteration\n* **x/xb**: last input drawn from self.dl (potentially modified by callbacks). xb is always a tuple (potentially with one element) and x is detuplified. You can only assign to xb.\n* **y/yb**: last target drawn from self.dl (potentially modified by callbacks). yb is always a tuple (potentially with one element) and y is detuplified. You can only assign to yb.\n* **pred**: last predictions from self.model (potentially modified by callbacks)\n* **loss**: last computed loss (potentially modified by callbacks)\n* **n_epoch**: the number of epochs in this training\n* **n_iter**: the number of iterations in the current self.dl\n* **epoch**: the current epoch index (from 0 to n_epoch-1)\n* **iter**: the current iteration index in self.dl (from 0 to n_iter-1)\n\nThe following attributes are added by TrainEvalCallback and should be available unless you went out of your way to remove that callback:\n* **train_iter**: the number of training iterations done since the beginning of this training\n* **pct_train**: from 0. to 1., the percentage of training iterations completed\n* **training**: flag to indicate if we're in training mode or not\n\nThe following attribute is added by Recorder and should be available unless you went out of your way to remove that callback:\n* **smooth_loss**: an exponentially-averaged version of the training loss",
"_____no_output_____"
],
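[
"As a small illustration of how the events and attributes above fit together (this is not part of the library code below), a callback that simply logs the smoothed loss at the end of every epoch could look like this:\n\n```python\nclass PrintSmoothLoss(Callback):\n    \"Log the exponentially-averaged training loss after each epoch\"\n    def after_epoch(self):\n        # epoch, n_epoch and smooth_loss are exposed through the Learner/Recorder\n        print(f'epoch {self.epoch + 1}/{self.n_epoch}: smooth loss {float(self.smooth_loss):.4f}')\n\n# learn = Learner(dls, model, cbs=PrintSmoothLoss())\n```\n\nAny other event in the list can be hooked in the same way by defining a method with the matching name.",
"_____no_output_____"
],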
[
"## Gambler's loss: noisy labels",
"_____no_output_____"
]
],
[
[
"#export\nclass GamblersCallback(Callback):\n \"A callback to use metrics with gambler's loss\"\n def after_loss(self): self.learn.pred = self.learn.pred[..., :-1]",
"_____no_output_____"
],
[
"from tsai.data.all import *\nfrom tsai.models.InceptionTime import *\nfrom tsai.models.layers import *\ndsid = 'NATOPS'\nX, y, splits = get_UCR_data(dsid, return_split=False)\ntfms = [None, Categorize()]\ndsets = TSDatasets(X, y, tfms=tfms, splits=splits)\ndls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=[64, 128])\nloss_func = gambler_loss()\nlearn = Learner(dls, InceptionTime(dls.vars, dls.c + 1), loss_func=loss_func, cbs=GamblersCallback, metrics=[accuracy])\nlearn.fit_one_cycle(1)",
"_____no_output_____"
]
],
[
[
"## Transform scheduler",
"_____no_output_____"
]
],
[
[
"# export\nclass TransformScheduler(Callback):\n \"A callback to schedule batch transforms during training based on a function (sched_lin, sched_exp, sched_cos (default), etc)\"\n def __init__(self, schedule_func:callable, show_plot:bool=False): \n self.schedule_func,self.show_plot = schedule_func,show_plot\n self.mult = []\n\n def before_fit(self):\n for pct in np.linspace(0, 1, len(self.dls.train) * self.n_epoch): self.mult.append(self.schedule_func(pct))\n # get initial magnitude values and update initial value\n self.mag = []\n self.mag_tfms = []\n for t in self.dls.after_batch: \n if hasattr(t, 'magnitude'):\n self.mag.append(t.magnitude)\n t.magnitude *= self.mult[0]\n self.mag_tfms.append(t)\n\n def after_batch(self):\n if self.training and len(self.mag_tfms)>0 and self.train_iter < len(self.mult):\n # set values for next batch\n for t,m in zip(self.mag_tfms, self.mag): \n t.magnitude = m * self.mult[self.train_iter]\n \n def after_fit(self):\n if self.show_plot and self.mult != [] and len(self.mag_tfms)>0: \n print()\n plt.plot(self.mult)\n plt.title('Scheduled tfms')\n plt.show()\n print()\n self.show_plot = False\n # set values to initial values\n for t,m in zip(self.mag_tfms, self.mag): t.magnitude = m\n \n def __repr__(self):\n return f'{self.__class__.__name__}({self.schedule_func})'",
"_____no_output_____"
],
[
"TransformScheduler(SchedCos(1, 0))",
"_____no_output_____"
],
[
"p = torch.linspace(0.,1,100)\nf = combine_scheds([0.3, 0.4, 0.3], [SchedLin(1.,1.), SchedCos(1.,0.), SchedLin(0.,.0), ])\nplt.plot(p, [f(o) for o in p]);",
"_____no_output_____"
],
[
"p = torch.linspace(0.,1,100)\nf = combine_scheds([0.3, 0.7], [SchedCos(0.,1.), SchedCos(1.,0.)])\nplt.plot(p, [f(o) for o in p]);",
"_____no_output_____"
]
],
[
[
"## ShowGraph",
"_____no_output_____"
]
],
[
[
"#export\nclass ShowGraph(Callback):\n \"(Modified) Update a graph of training and validation loss\"\n order,run_valid=65,False\n names = ['train', 'valid']\n def __init__(self, plot_metrics:bool=True, final_losses:bool=False):\n store_attr(\"plot_metrics,final_losses\")\n\n\n def before_fit(self):\n self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, \"gather_preds\")\n if not(self.run): return\n self.nb_batches = []\n\n def after_train(self): self.nb_batches.append(self.train_iter)\n\n def after_epoch(self):\n \"Plot validation loss in the pbar graph\"\n if not self.nb_batches: return\n rec = self.learn.recorder\n iters = range_of(rec.losses)\n val_losses = [v[1] for v in rec.values]\n x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses))\n y_min = min((min(rec.losses), min(val_losses)))\n y_max = max((max(rec.losses), max(val_losses)))\n margin = (y_max - y_min) * .05\n y_bounds = (y_min - margin, y_max + margin)\n self.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds)\n\n def after_fit(self):\n plt.close(self.graph_ax.figure)\n if self.plot_metrics: self.learn.plot_metrics(final_losses=self.final_losses)\n\n def update_graph(self, graphs, x_bounds=None, y_bounds=None, figsize=(6,4)):\n if not hasattr(self, 'graph_fig'):\n self.graph_fig, self.graph_ax = plt.subplots(1, figsize=figsize)\n self.graph_out = display(self.graph_ax.figure, display_id=True)\n self.graph_ax.clear()\n if len(self.names) < len(graphs): self.names += [''] * (len(graphs) - len(self.names))\n for g,n in zip(graphs,self.names): self.graph_ax.plot(*g, label=n)\n self.graph_ax.legend(loc='upper right')\n self.graph_ax.grid(color='gainsboro', linewidth=.5)\n if x_bounds is not None: self.graph_ax.set_xlim(*x_bounds)\n if y_bounds is not None: self.graph_ax.set_ylim(*y_bounds)\n self.graph_ax.set_title(f'Losses\\nepoch: {self.epoch +1}/{self.n_epoch}')\n self.graph_out.update(self.graph_ax.figure)\n \nShowGraphCallback2 = ShowGraph",
"_____no_output_____"
]
],
[
[
"## Uncertainty-based data augmentation",
"_____no_output_____"
]
],
[
[
"#export\nclass UBDAug(Callback):\n r\"\"\"A callback to implement the uncertainty-based data augmentation.\"\"\"\n \n def __init__(self, batch_tfms:list, N:int=2, C:int=4, S:int=1): \n r'''\n Args:\n batch_tfms: list of available transforms applied to the combined batch. They will be applied in addition to the dl tfms.\n N: # composition steps (# transforms randomly applied to each sample)\n C: # augmented data per input data (# times N transforms are applied)\n S: # selected data points used for training (# augmented samples in the final batch from each original sample)\n '''\n \n self.C, self.S = C, min(S, C)\n self.batch_tfms = L(batch_tfms)\n self.n_tfms = len(self.batch_tfms)\n self.N = min(N, self.n_tfms)\n \n def before_fit(self):\n assert hasattr(self.loss_func, 'reduction'), \"You need to pass a loss_function with a 'reduction' attribute\"\n self.red = self.loss_func.reduction\n \n def before_batch(self):\n if self.training:\n with torch.no_grad():\n setattr(self.loss_func, 'reduction', 'none')\n for i in range(self.C):\n idxs = np.random.choice(self.n_tfms, self.N, False)\n x_tfm = compose_tfms(self.x, self.batch_tfms[idxs], split_idx=0)\n loss = self.loss_func(self.learn.model(x_tfm), self.y).reshape(-1,1)\n if i == 0:\n x2 = x_tfm.unsqueeze(1)\n max_loss = loss\n else: \n losses = torch.cat((max_loss, loss), dim=1)\n x2 = torch.cat((x2, x_tfm.unsqueeze(1)), dim=1)\n x2 = x2[np.arange(x2.shape[0]).reshape(-1,1), losses.argsort(1)[:, -self.S:]]\n max_loss = losses.max(1)[0].reshape(-1,1)\n setattr(self.loss_func, 'reduction', self.red)\n x2 = x2.reshape(-1, self.x.shape[-2], self.x.shape[-1])\n if self.S > 1: self.learn.yb = (torch_tile(self.y, 2),)\n self.learn.xb = (x2,)\n\n def __repr__(self): return f'UBDAug({[get_tfm_name(t) for t in self.batch_tfms]})'",
"_____no_output_____"
],
[
"from tsai.data.all import *\nfrom tsai.models.all import *\ndsid = 'NATOPS'\nX, y, splits = get_UCR_data(dsid, return_split=False)\ntfms = [None, Categorize()]\ndsets = TSDatasets(X, y, tfms=tfms, splits=splits)\ndls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, batch_tfms=[TSStandardize()])\nmodel = create_model(InceptionTime, dls=dls)\nTS_tfms = [TSMagScale(.75, p=.5), TSMagWarp(.1, p=0.5), TSWindowWarp(.25, p=.5), \n TSSmooth(p=0.5), TSRandomResizedCrop(.1, p=.5), \n TSRandomCropPad(.3, p=0.5), \n TSMagAddNoise(.5, p=.5)]\n\nubda_cb = UBDAug(TS_tfms, N=2, C=4, S=2)\nlearn = Learner(dls, model, cbs=ubda_cb, metrics=accuracy)\nlearn.fit_one_cycle(1)",
"_____no_output_____"
],
[
"# export\n\nclass WeightedPerSampleLoss(Callback):\n order = 65\n\n def __init__(self, instance_weights):\n store_attr()\n\n def before_fit(self):\n self.old_loss = self.learn.loss_func\n self.reduction = getattr(self.learn.loss_func, 'reduction', None)\n self.learn.loss_func = _PerInstanceLoss(crit=self.learn.loss_func)\n assert len(self.instance_weights) == len(self.learn.dls.train.dataset) + len(self.learn.dls.valid.dataset)\n self.instance_weights = torch.as_tensor(self.instance_weights, device=self.learn.dls.device)\n\n def before_batch(self):\n input_idxs = self.learn.dls.train.input_idxs if self.training else self.learn.dls.valid.input_idxs\n self.learn.loss_func.weights = self.instance_weights[input_idxs]\n\n def after_fit(self):\n self.learn.loss_func = self.old_loss\n if self.reduction is not None: self.learn.loss_func.reduction = self.reduction\n\n\nclass _PerInstanceLoss(Module):\n def __init__(self, crit):\n self.crit = crit\n self.crit.reduction = 'none'\n self.weights = None\n\n def forward(self, input, target):\n return (self.crit(input, target) * self.weights / self.weights.sum()).sum()",
"_____no_output_____"
],
[
"# export\n\nclass BatchSubsampler(Callback):\n \"\"\" Callback that selects a percentage of samples and/ or sequence steps with replacement from each training batch\n\n Args:\n ====\n\n sample_pct: percentage of random samples (or instances) that will be drawn. If 1. the output batch will contain the same number of samples \n as the input batch.\n step_pct: percentage of random sequence steps that will be drawn. If 1. the output batch will contain the same number of sequence steps \n as the input batch. If used with models that don't use a pooling layer, this must be set to 1 to keep the same dimensions. \n With CNNs, this value may be different.\n same_seq_len: If True, it ensures that the output has the same shape as the input, even if the step_pct chosen is < 1. Defaults to True.\n\n \"\"\"\n\n def __init__(self, sample_pct:Optional[float]=None, step_pct:Optional[float]=None, same_seq_len:bool=True):\n store_attr()\n\n def before_fit(self):\n self.run = not hasattr(self, \"gather_preds\")\n if not(self.run): return\n\n def before_batch(self):\n if not self.training: return\n\n if self.sample_pct is not None:\n B = self.x.shape[0]\n if isinstance(self.sample_pct, tuple):\n sample_pct = np.random.rand() * (self.sample_pct[1] - self.sample_pct[0]) + self.sample_pct[0]\n else: \n sample_pct = self.sample_pct\n idxs = np.random.choice(B, round(B * sample_pct), True)\n self.learn.xb = tuple(xbi[idxs] for xbi in self.learn.xb)\n self.learn.yb = tuple(ybi[idxs] for ybi in self.learn.yb)\n\n if self.step_pct is not None:\n S = self.x.shape[-1]\n if isinstance(self.step_pct, tuple):\n step_pct = np.random.rand() * (self.step_pct[1] - self.step_pct[0]) + self.step_pct[0]\n else: \n step_pct = self.step_pct\n if self.step_pct != 1 and self.same_seq_len: \n idxs = np.sort(np.tile(np.random.choice(S, round(S * step_pct), True), math.ceil(1 / step_pct))[:S])\n else:\n idxs = np.sort(np.random.choice(S, round(S * step_pct), True))\n self.learn.xb = tuple(xbi[...,idxs] for xbi in self.learn.xb)",
"_____no_output_____"
],
[
"# export\n\nclass BatchLossFilter(Callback):\n \"\"\" Callback that selects the hardest samples in every batch representing a percentage of the total loss\"\"\"\n\n def __init__(self, loss_perc=1., schedule_func:Optional[callable]=None):\n store_attr()\n\n def before_fit(self):\n self.run = not hasattr(self, \"gather_preds\")\n if not(self.run): return\n self.crit = self.learn.loss_func\n if hasattr(self.crit, 'reduction'): self.red = self.crit.reduction\n\n def before_batch(self):\n if not self.training or self.loss_perc == 1.: return\n with torch.no_grad(): \n if hasattr(self.crit, 'reduction'): setattr(self.crit, 'reduction', 'none')\n self.losses = self.crit(self.learn.model(self.x), self.y)\n if hasattr(self.crit, 'reduction'): setattr(self.crit, 'reduction', self.red)\n self.losses /= self.losses.sum()\n idxs = torch.argsort(self.losses, descending=True)\n if self.schedule_func is not None: loss_perc = self.loss_perc * self.schedule_func(self.pct_train)\n else: loss_perc = self.loss_perc\n cut_idx = torch.argmax((self.losses[idxs].cumsum(0) > loss_perc).float())\n idxs = idxs[:cut_idx]\n self.learn.xb = tuple(xbi[idxs] for xbi in self.learn.xb)\n self.learn.yb = tuple(ybi[idxs] for ybi in self.learn.yb)\n\n def after_fit(self):\n if hasattr(self.learn.loss_func, 'reduction'): setattr(self.learn.loss_func, 'reduction', self.red)",
"_____no_output_____"
],
[
"# export\n\nclass RandomWeightLossWrapper(Callback):\n\n def before_fit(self):\n self.run = not hasattr(self, \"gather_preds\")\n if not(self.run): return\n self.crit = self.learn.loss_func\n if hasattr(self.crit, 'reduction'): self.red = self.crit.reduction\n self.learn.loss_func = self._random_weight_loss\n\n def _random_weight_loss(self, input: Tensor, target: Tensor) -> Tensor:\n if self.training:\n setattr(self.crit, 'reduction', 'none')\n loss = self.crit(input, target)\n setattr(self.crit, 'reduction', self.red)\n rw = torch.rand(input.shape[0], device=input.device)\n rw /= rw.sum()\n non_red_loss = loss * rw\n return non_red_loss.sum()\n else:\n return self.crit(input, target)\n\n def after_fit(self):\n if hasattr(self.crit, 'reduction'): setattr(self.crit, 'reduction', self.red)\n self.learn.loss_func = self.crit",
"_____no_output_____"
],
[
"# export\n\nclass SamplerWithReplacement(Callback):\n \"\"\" Callback that selects a percentage of samples and/ or sequence steps with replacement from each training batch\"\"\"\n\n def before_fit(self):\n self.run = not hasattr(self, \"gather_preds\")\n if not(self.run): return\n\n self.old_get_idxs = self.learn.dls.train.get_idxs\n self.learn.dls.train.get_idxs = self._get_idxs\n\n def _get_idxs(self):\n dl = self.learn.dls.train\n if dl.n==0: return []\n if dl.weights is not None:\n return np.random.choice(dl.n, dl.n, p=dl.weights)\n idxs = Inf.count if dl.indexed else Inf.nones\n if dl.n is not None: idxs = np.random.choice(dl.n,dl.n,True)\n if dl.shuffle: idxs = dl.shuffle_fn(idxs)\n return idxs\n\n def after_fit(self):\n self.learn.dls.train.get_idxs = self.old_get_idxs",
"_____no_output_____"
],
[
"# export\n\nclass BatchMasker(Callback):\n \"\"\" Callback that applies a random mask to each sample in a training batch\n\n Args:\n ====\n r: probability of masking.\n subsequence_mask: apply a mask to random subsequences.\n lm: average mask len when using stateful (geometric) masking.\n stateful: geometric distribution is applied so that average mask length is lm.\n sync: all variables have the same masking.\n variable_mask: apply a mask to random variables. Only applicable to multivariate time series.\n future_mask: used to train a forecasting model.\n schedule_func: if a scheduler is passed, it will modify the probability of masking during training.\n \"\"\"\n\n def __init__(self, r:float=.15, lm:int=3, stateful:bool=True, sync:bool=False, subsequence_mask:bool=True, \n variable_mask:bool=False, future_mask:bool=False, schedule_func:Optional[callable]=None):\n store_attr()\n\n def before_fit(self):\n self.run = not hasattr(self, \"gather_preds\")\n if not(self.run): return\n\n def before_batch(self):\n if not self.training: return\n r = self.r * self.schedule_func(self.pct_train) if self.schedule_func is not None else self.r\n mask = create_mask(self.x, r=r, lm=self.lm, stateful=self.stateful, sync=self.sync, \n subsequence_mask=self.subsequence_mask, variable_mask=self.variable_mask, future_mask=self.future_mask)\n self.learn.xb = (self.xb[0].masked_fill(mask, 0),)\n # In my tests, mask-based compensation doesn't seem to be important. ??\n # mean_per_seq = (torch.max(torch.ones(1, device=mask.device), torch.sum(mask, dim=-1).unsqueeze(-1)) / mask.shape[-1])\n # self.learn.xb = (self.xb[0].masked_fill(mask, 0) / (1 - mean_per_seq), )",
"_____no_output_____"
],
[
"# export\n\nclass SamplerWithReplacement(Callback):\n \"\"\" Callback that modify the sampler to select a percentage of samples and/ or sequence steps with replacement from each training batch\"\"\"\n\n def before_fit(self):\n self.run = not hasattr(self, \"gather_preds\")\n if not(self.run): return\n\n self.old_get_idxs = self.learn.dls.train.get_idxs\n self.learn.dls.train.get_idxs = self._get_idxs\n\n def _get_idxs(self):\n dl = self.learn.dls.train\n if dl.n==0: return []\n if dl.weights is not None:\n return np.random.choice(dl.n, dl.n, p=dl.weights)\n idxs = Inf.count if dl.indexed else Inf.nones\n if dl.n is not None: idxs = np.random.choice(dl.n,dl.n,True)\n if dl.shuffle: idxs = dl.shuffle_fn(idxs)\n return idxs\n\n def after_fit(self):\n self.learn.dls.train.get_idxs = self.old_get_idxs",
"_____no_output_____"
],
[
"#hide\nout = create_scripts(); beep(out)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e733afc9c762b83a0c66fbfe186c6c1fc071ca09 | 468,078 | ipynb | Jupyter Notebook | notebooks/lab10_kmeans_clustering.ipynb | tiago-oom/Data-Mining-21-22 | eb250df37511ca1d2547be098ed477ba7f5996e9 | [
"MIT"
] | null | null | null | notebooks/lab10_kmeans_clustering.ipynb | tiago-oom/Data-Mining-21-22 | eb250df37511ca1d2547be098ed477ba7f5996e9 | [
"MIT"
] | null | null | null | notebooks/lab10_kmeans_clustering.ipynb | tiago-oom/Data-Mining-21-22 | eb250df37511ca1d2547be098ed477ba7f5996e9 | [
"MIT"
] | null | null | null | 388.124378 | 50,272 | 0.921902 | [
[
[
"from os.path import join\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport matplotlib.cm as cm\nfrom sklearn.metrics import silhouette_score, silhouette_samples\nfrom sklearn.cluster import KMeans\n\nsns.set()",
"_____no_output_____"
]
],
[
[
"## Import preprocessed data",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(join('..', 'data', 'tugas_preprocessed.csv'))",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"# Splitting feature names into groups\nnon_metric_features = df.columns[df.columns.str.startswith('x')]\npc_features = df.columns[df.columns.str.startswith('PC')]\nmetric_features = df.columns[~df.columns.str.startswith('x') & ~df.columns.str.startswith('PC')]",
"_____no_output_____"
]
],
[
[
"## K-Means Clustering\nWhat is K-Means clustering? How does it work?\n\n### How is it computed?\n",
"_____no_output_____"
],
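[
"A minimal numpy sketch of the underlying (Lloyd's) algorithm (illustrative only, and assuming no cluster ever becomes empty; below we will use `sklearn`'s implementation instead):\n\n```python\nimport numpy as np\n\ndef kmeans_sketch(X, k, n_iter=100, seed=0):\n    rng = np.random.default_rng(seed)\n    centroids = X[rng.choice(len(X), k, replace=False)]  # 1. pick k initial centroids\n    for _ in range(n_iter):\n        dists = np.linalg.norm(X[:, None] - centroids, axis=2)\n        labels = dists.argmin(axis=1)                     # 2. assign each point to its closest centroid\n        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])\n        if np.allclose(new_centroids, centroids):         # 4. stop when the centroids no longer move\n            break\n        centroids = new_centroids                         # 3. recompute the centroids\n    return labels, centroids\n```\n\n`sklearn`'s `KMeans` follows the same assign/update loop, with smarter initialization and several restarts.",
"_____no_output_____"
],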
[
"### Characteristics:\n- *Number of clusters* need to be set apriori\n- One of the *fastest* clustering algorithms\n- The results *depend on the initialization* (stochastic)\n- Prone to *local optima*\n- Favors *convex* (round shape) and *isotropic* (same shape) clusters",
"_____no_output_____"
],
[
"### How to apply K-Means clustering?",
"_____no_output_____"
]
],
[
[
"kmclust = KMeans(n_clusters=5, init='random', n_init=1, random_state=None)\n\n# n_clusters=8,\n# *,\n# init='k-means++',\n# n_init=10, --> Number of time the k-means algorithm will run with different centroid seeds. Final results will be the best output of\n# n_init consecutive runs in terms of inertia.\n# max_iter=300,\n# tol=0.0001,\n# precompute_distances='deprecated',\n# verbose=0,\n# random_state=None, --> allows me to duplicate my randomness\n# copy_x=True,\n# n_jobs='deprecated',\n# algorithm='auto',\n\n \n# the fit method\nkmclust.fit(df[metric_features])",
"_____no_output_____"
],
[
"# the predict method\nkmclust.predict(df[metric_features])",
"_____no_output_____"
],
[
"# the transform method\npd.DataFrame(kmclust.transform(df[metric_features]))",
"_____no_output_____"
]
],
[
[
"### How can we improve the initialization step?",
"_____no_output_____"
]
],
[
[
"# Better initialization method and provide more n_init\n\n# init='k-means++' & n_init=15\nkmclust = KMeans(n_clusters=5, init='k-means++', n_init=15, random_state=1)\nkmclust.fit(df[metric_features])",
"_____no_output_____"
],
[
"kmclust.predict(df[metric_features])\n# Returns the same result everytime",
"_____no_output_____"
]
],
[
[
"*init='k-means++'* initializes the centroids to be (generally) distant from each other, leading to probably better results than random initialization. *n_init=K* allows to initialize KMeans K times and pick the best clustering in terms of Inertia. This can been shown in the link below.\n\n**Empirical evaluation of the impact of k-means initialization:**\n\nhttps://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_stability_low_dim_dense.html#sphx-glr-auto-examples-cluster-plot-kmeans-stability-low-dim-dense-py",
"_____no_output_____"
],
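[
"A quick way to see this on our data (optional; it reuses the `df` and `KMeans` objects already available above):\n\n```python\nfor init_, n_init_ in [('random', 1), ('k-means++', 15)]:\n    km_ = KMeans(n_clusters=5, init=init_, n_init=n_init_, random_state=1).fit(df[metric_features])\n    print(init_, 'n_init =', n_init_, '-> inertia:', round(km_.inertia_, 1))\n```\n\nThe k-means++ run with several restarts should report an inertia that is no worse (and typically better) than the single random initialization.",
"_____no_output_____"
],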
[
"### Defining the number of clusters:",
"_____no_output_____"
]
],
[
[
"range_clusters = range(2, 11) # Goes from 2 to 10",
"_____no_output_____"
],
[
"inertia = []\n\nfor n_clus in range_clusters: # iterate over desired ncluster range\n \n kmclust = KMeans(n_clusters=n_clus, init='k-means++', n_init=15, random_state=42)\n kmclust.fit(df[metric_features])\n \n inertia.append(kmclust.inertia_) # save the inertia of the given cluster solution\n\nprint(inertia)",
"[67166.77874914452, 52973.55241023803, 46736.65485715153, 42189.58282488099, 39883.766299443174, 37885.24251203413, 36277.77069520268, 34921.02592728754, 33556.1286728264]\n"
]
],
[
[
"**Inertia (within-cluster sum-of-squares distance) Formula:**\n$$\\sum_{j=0}^{C}\\sum_{i=0}^{n_j}(||x_i - \\mu_j||^2)$$\n, where:\n\n$C$: Set of identified clusters.\n\n$n_j$: Set of observations belonging to cluster $j$.\n\n$x_i$: Observation $i$.\n\n$\\mu_j$: Centroid of cluster $j$.",
"_____no_output_____"
]
],
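[
[
"The value reported by `kmclust.inertia_` is exactly this sum. As an optional check, it can be recomputed by hand from the attributes of the most recently fitted `kmclust`:\n\n```python\nX = df[metric_features].values\nssw = sum(((X[kmclust.labels_ == j] - c) ** 2).sum()\n          for j, c in enumerate(kmclust.cluster_centers_))\nprint(ssw, kmclust.inertia_)  # the two values should match up to floating point error\n```",
"_____no_output_____"
]
],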
[
[
"# The inertia plot\nplt.figure(figsize=(9,5))\n\nplt.plot(pd.Series(inertia, index = range_clusters))\n\nplt.ylabel(\"Inertia: SSw\")\nplt.xlabel(\"Number of clusters\")\nplt.title(\"Inertia plot over clusters\", size=15)\nplt.show()",
"_____no_output_____"
],
[
"# I would say the elbow is on number of cluster = 4, so we can try with 3, 4 or 5 and evaluate the results",
"_____no_output_____"
]
],
[
[
"**Silhouette Coefficient formula for a single sample:**\n$$s = \\frac{b - a}{max(a, b)}$$\n, where:\n- $a$: The mean distance between a sample and all other points in the same cluster.\n- $b$: The mean distance between a sample and all other points in the next nearest cluster",
"_____no_output_____"
]
],
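[
[
"For example, a sample whose mean distance to the other points in its own cluster is $a = 0.5$ and whose mean distance to the nearest other cluster is $b = 2.0$ gets $s = (2.0 - 0.5)/\\max(0.5, 2.0) = 0.75$. Values close to $1$ indicate a well-separated sample, values near $0$ a sample lying between two clusters, and negative values a sample that is probably assigned to the wrong cluster. `silhouette_samples` below returns this score per observation, and `silhouette_score` returns its average.",
"_____no_output_____"
]
],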
[
[
"# Adapted from:\n# https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html#sphx-glr-auto-examples-cluster-plot-kmeans-silhouette-analysis-py\n\n# Storing average silhouette metric\navg_silhouette = []\n\nfor nclus in range_clusters:\n \n # Skip nclus == 1, start with 2 clusters\n if nclus == 1:\n continue\n \n # Create a figure\n fig = plt.figure(figsize=(13, 7))\n\n # Initialize the KMeans object with n_clusters value and a random generator\n # seed of 10 for reproducibility.\n kmclust = KMeans(n_clusters=nclus, init='k-means++', n_init=15, random_state=1)\n # Use the fit_predict argument\n cluster_labels = kmclust.fit_predict(df[metric_features])\n\n # The silhouette_score gives the average value for all the samples.\n # This gives a perspective into the density and separation of the formed clusters\n silhouette_avg = silhouette_score(df[metric_features], cluster_labels)\n avg_silhouette.append(silhouette_avg)\n \n print(f\"For n_clusters = {nclus}, the average silhouette_score is : {silhouette_avg}\")\n\n # -------------------------------------------------------------------------------------\n \n # Compute the silhouette scores for each sample\n sample_silhouette_values = silhouette_samples(df[metric_features], cluster_labels)\n\n y_lower = 10\n \n for i in range(nclus):\n \n # Aggregate the silhouette scores for samples belonging to cluster i, and sort them\n ith_cluster_silhouette_values = sample_silhouette_values[cluster_labels == i]\n ith_cluster_silhouette_values.sort()\n \n # Get y_upper to demarcate silhouette y range size\n size_cluster_i = ith_cluster_silhouette_values.shape[0]\n y_upper = y_lower + size_cluster_i\n \n # Filling the silhouette\n color = cm.nipy_spectral(float(i) / nclus)\n plt.fill_betweenx(np.arange(y_lower, y_upper),\n 0, ith_cluster_silhouette_values,\n facecolor=color, edgecolor=color, alpha=0.7)\n\n # Label the silhouette plots with their cluster numbers at the middle\n plt.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))\n\n # Compute the new y_lower for next plot\n y_lower = y_upper + 10 # 10 for the 0 samples\n\n \n plt.title(\"The silhouette plot for the various clusters.\")\n plt.xlabel(\"The silhouette coefficient values\")\n plt.ylabel(\"Cluster label\")\n\n # The vertical line for average silhouette score of all the values\n plt.axvline(x=silhouette_avg, color=\"red\", linestyle=\"--\")\n \n # The silhouette coefficient can range from -1, 1\n xmin, xmax = np.round(sample_silhouette_values.min() -0.1, 2), np.round(sample_silhouette_values.max() + 0.1, 2)\n plt.xlim([xmin, xmax])\n \n # The (nclus+1)*10 is for inserting blank space between silhouette\n # plots of individual clusters, to demarcate them clearly.\n plt.ylim([0, len(df[metric_features]) + (nclus + 1) * 10])\n\n plt.yticks([]) # Clear the yaxis labels / ticks\n plt.xticks(np.arange(xmin, xmax, 0.1))",
"For n_clusters = 2, the average silhouette_score is : 0.2216925624241448\nFor n_clusters = 3, the average silhouette_score is : 0.23707766584584286\nFor n_clusters = 4, the average silhouette_score is : 0.21036821905415626\nFor n_clusters = 5, the average silhouette_score is : 0.19514283937028068\nFor n_clusters = 6, the average silhouette_score is : 0.18736793579734382\nFor n_clusters = 7, the average silhouette_score is : 0.16983500681630878\nFor n_clusters = 8, the average silhouette_score is : 0.16670886503251847\nFor n_clusters = 9, the average silhouette_score is : 0.1549596056099329\nFor n_clusters = 10, the average silhouette_score is : 0.15066928575504182\n"
],
[
"# Analyse the graphics:\n# As n_clusters increase there are more negative silhouette values (oservations that are closer to other clusters)\n# More condensed clusters\n# See the maximum silhouette_score --> n_clusters = 3, 4 or 5\n",
"_____no_output_____"
],
[
"# The average silhouette plot\n# The inertia plot\nplt.figure(figsize=(9,5))\n\nplt.plot(pd.Series(avg_silhouette, index = range_clusters))\n\nplt.ylabel(\"Average silhouette\")\nplt.xlabel(\"Number of clusters\")\nplt.title(\"Average silhouette plot over clusters\", size=15)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Final KMeans clustering solution",
"_____no_output_____"
]
],
[
[
"# final cluster solution\nnumber_clusters = 3\nkmclust = KMeans(n_clusters=number_clusters, init='k-means++', n_init=15, random_state=1)\nkm_labels = kmclust.fit_predict(df[metric_features])\nkm_labels",
"_____no_output_____"
],
[
"# Characterizing the final clusters\ndf_concat = pd.concat((df, pd.Series(km_labels, name='labels')), axis=1)\ndf_concat.groupby('labels').mean()",
"_____no_output_____"
]
],
[
[
"### How can we combine the 2 algorithms?",
"_____no_output_____"
],
[
"## Exercise:\n**Apply Hierarchical Clustering and K-means on the Principal Components.**\n\nChoose the appropriate parameters and number of clusters for each algorithm and interpret each cluster based on the Principal Components interpretation:",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e733b4d88c8314040cfc323470e733d0af2750c4 | 1,137 | ipynb | Jupyter Notebook | notebooks/python/solutions/E2.1.ipynb | rses-dl-course-durham/rses-dl-course-durham.github.io | eaa674e612bf7e4ec21ee2038ba86fe2060d67ac | [
"CC-BY-4.0"
] | null | null | null | notebooks/python/solutions/E2.1.ipynb | rses-dl-course-durham/rses-dl-course-durham.github.io | eaa674e612bf7e4ec21ee2038ba86fe2060d67ac | [
"CC-BY-4.0"
] | null | null | null | notebooks/python/solutions/E2.1.ipynb | rses-dl-course-durham/rses-dl-course-durham.github.io | eaa674e612bf7e4ec21ee2038ba86fe2060d67ac | [
"CC-BY-4.0"
] | null | null | null | 24.717391 | 86 | 0.515391 | [
[
[
"# E2.1 Solution\n\n```python\nmodel = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, (3,3), padding='same', activation=tf.nn.relu,\n input_shape=(28, 28, 1)),\n tf.keras.layers.MaxPool2D((2, 2), strides=2),\n tf.keras.layers.Conv2D(64, (3,3), padding='same', activation=tf.nn.relu),\n tf.keras.layers.MaxPool2D((2, 2), strides=2),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation=tf.nn.relu),\n tf.keras.layers.Dense(10)\n])\n\n```",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
e733c491bc5cc66db897ba4a9a932de913bea511 | 4,723 | ipynb | Jupyter Notebook | Chatbot JARVIS.ipynb | douglasparism/Hello-World | d1083a5d532174065df5d367c5b496ac33bca97c | [
"Apache-2.0"
] | null | null | null | Chatbot JARVIS.ipynb | douglasparism/Hello-World | d1083a5d532174065df5d367c5b496ac33bca97c | [
"Apache-2.0"
] | null | null | null | Chatbot JARVIS.ipynb | douglasparism/Hello-World | d1083a5d532174065df5d367c5b496ac33bca97c | [
"Apache-2.0"
] | null | null | null | 23.497512 | 194 | 0.514292 | [
[
[
"#Chatbot de bienvenida \n\nfrom time import sleep\ndef print_words(sentence):\n for word in sentence.split():\n for l in word:\n sleep(.1)\n print(l, end = '')\n print(end = ' ')",
"_____no_output_____"
],
[
"en = \"gucci gang gucci gang\"\nprint_words(en)",
"gucci gang gucci gang "
],
[
"prompt = ' >> '",
"_____no_output_____"
],
[
"nombre_usuario = input(prompt)",
" >> jarvis\n"
],
[
"nombre_usuario",
"_____no_output_____"
],
[
"respuesta = \"Mucho gusto en conocerte %s, my name is earl\" %nombre_usuario",
"_____no_output_____"
],
[
"print(respuesta)",
"Mucho gusto en conocerte jarvis, my name is earl\n"
],
[
"%run welcome.py",
"Hola, bienvenida o bienvenido a simulación matemática. Me puedes llamar Alice y me gustaría saber un poco acerca de ti, por ejemplo ¿Cuál es tu nombre? >> Doug\nGusto en conocerte Doug, espero que te guste el curso. Ahora, me gustaría saber donde vives Doug, ¿cuál es tu ciudad? >> GDL\nMmm... ¿eso es una ciudad real? GDL. Bueno, después lo investigo. ¿Qué edad tienes? >> 19\n¿Te ha gustado esta bienvenida? >> Si\nMuy bien Doug, dices que Si te gustó la bienvenida. Tú vives en GDL, ya dije que voy a investigar donde es eso. Y además estas por cumplir 20 años. Fue un gusto conocerte, hasta pronto! "
]
],
[
[
"## Actividad: Realizar un chatbot ",
"_____no_output_____"
]
],
[
[
"from time import sleep\n\ndef print_words(sentence):\n for word in sentence.split():\n for l in word:\n sleep(.05)\n print(l, end = '')\n print(end = ' ')\n \ns1 = 'Hola, bienvenido, mi nombre es Jarvis'\nprint_words(s1)\ns2 = '¿Cuál es tu nombre?'\nprint_words(s2)\nprompt = ' >> '\nnombre_usuario = input(prompt)\ns3 = \"Que tal \" + nombre_usuario + ' me gustaria saber mas de ti...'\nprint_words(s3)\ns4 = 'De donde eres %s, ¿cuál es tu ciudad de origen?' % nombre_usuario\nprint_words(s4)\nlugar = input(prompt)\ns5 = 'Ok, entonces vienes de %s, ¿Cuantos años tienes?' %lugar\nprint_words(s5)\nedad = input(prompt)\ns6 = '¿Cuanto llevas en ITESO?'\nprint_words(s6)\ntimepo = input(prompt)\nsfinal = ' Muy bien %s, llevas %s en ITESO, bastante...' % (nombre_usuario, timepo)\nsfinal2 = ' Eres de %s, me gusta ese lugar. Planeo visitarlo luego'% lugar\nsfinal3 = 'Que tengas un buen dia!\"\nprint_words(sfinal)\nprint_words(sfinal2)\nprint_words(sfinal3)\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e733cb230a41cf2d0f91921c46ae02dbdf173b38 | 190,264 | ipynb | Jupyter Notebook | Image Classification - Keras TF2.0.ipynb | msunil10052/Image-Classification-1 | 5feddb50ba75a0237edaf356de483217baea720f | [
"MIT"
] | null | null | null | Image Classification - Keras TF2.0.ipynb | msunil10052/Image-Classification-1 | 5feddb50ba75a0237edaf356de483217baea720f | [
"MIT"
] | null | null | null | Image Classification - Keras TF2.0.ipynb | msunil10052/Image-Classification-1 | 5feddb50ba75a0237edaf356de483217baea720f | [
"MIT"
] | null | null | null | 190,264 | 190,264 | 0.906556 | [
[
[
"This notebook trains a neural network model to classify images of clothing, like sneakers and shirts. \n\nThis notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow.",
"_____no_output_____"
]
],
[
[
"try:\n # %tensorflow_version only exists in Colab.\n %tensorflow_version 2.x\nexcept Exception:\n pass",
"TensorFlow 2.x selected.\n"
],
[
"from __future__ import absolute_import, division, print_function, unicode_literals\n\n# TensorFlow and tf.keras\nimport tensorflow as tf\nfrom tensorflow import keras\n\n# Helper libraries\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nprint(tf.__version__)",
"2.0.0\n"
]
],
[
[
"## Import the Fashion MNIST dataset",
"_____no_output_____"
]
],
[
[
"fashion_mnist = keras.datasets.fashion_mnist\n\n(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()",
"Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz\n32768/29515 [=================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz\n26427392/26421880 [==============================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz\n8192/5148 [===============================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz\n4423680/4422102 [==============================] - 0s 0us/step\n"
]
],
[
[
"Loading the dataset returns four NumPy arrays:\n\n* The `train_images` and `train_labels` arrays are the *training set*—the data the model uses to learn.\n* The model is tested against the *test set*, the `test_images`, and `test_labels` arrays.\n\nThe images are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The *labels* are an array of integers, ranging from 0 to 9. These correspond to the *class* of clothing the image represents:\n\n<table>\n <tr>\n <th>Label</th>\n <th>Class</th>\n </tr>\n <tr>\n <td>0</td>\n <td>T-shirt/top</td>\n </tr>\n <tr>\n <td>1</td>\n <td>Trouser</td>\n </tr>\n <tr>\n <td>2</td>\n <td>Pullover</td>\n </tr>\n <tr>\n <td>3</td>\n <td>Dress</td>\n </tr>\n <tr>\n <td>4</td>\n <td>Coat</td>\n </tr>\n <tr>\n <td>5</td>\n <td>Sandal</td>\n </tr>\n <tr>\n <td>6</td>\n <td>Shirt</td>\n </tr>\n <tr>\n <td>7</td>\n <td>Sneaker</td>\n </tr>\n <tr>\n <td>8</td>\n <td>Bag</td>\n </tr>\n <tr>\n <td>9</td>\n <td>Ankle boot</td>\n </tr>\n</table>\n\nEach image is mapped to a single label. Since the *class names* are not included with the dataset, store them here to use later when plotting the images:",
"_____no_output_____"
]
],
[
[
"class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',\n 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']",
"_____no_output_____"
]
],
[
[
"## Explore the data\n\nLet's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:",
"_____no_output_____"
]
],
[
[
"train_images.shape",
"_____no_output_____"
]
],
[
[
"Likewise, there are 60,000 labels in the training set:",
"_____no_output_____"
]
],
[
[
"len(train_labels)",
"_____no_output_____"
]
],
[
[
"Each label is an integer between 0 and 9:",
"_____no_output_____"
]
],
[
[
"train_labels[0:2]",
"_____no_output_____"
]
],
[
[
"There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels:",
"_____no_output_____"
]
],
[
[
"test_images.shape",
"_____no_output_____"
]
],
[
[
"And the test set contains 10,000 images labels:",
"_____no_output_____"
]
],
[
[
"len(test_labels)",
"_____no_output_____"
]
],
[
[
"## Preprocess the data\n\nThe data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255:",
"_____no_output_____"
]
],
[
[
"plt.figure()\nplt.imshow(train_images[0])\nplt.colorbar()\nplt.grid(False)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so, divide the values by 255. It's important that the *training set* and the *testing set* be preprocessed in the same way:",
"_____no_output_____"
]
],
[
[
"train_images = train_images / 255.0\n\ntest_images = test_images / 255.0",
"_____no_output_____"
]
],
[
[
"To verify that the data is in the correct format and that you're ready to build and train the network, let's display the first 25 images from the *training set* and display the class name below each image.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10,10))\nfor i in range(25):\n plt.subplot(5,5,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(train_images[i], cmap=plt.cm.binary)\n plt.xlabel(class_names[train_labels[i]])\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Build the model\n\nBuilding the neural network requires configuring the layers of the model, then compiling the model.",
"_____no_output_____"
],
[
"### Set up the layers\n\nThe basic building block of a neural network is the *layer*. Layers extract representations from the data fed into them. Hopefully, these representations are meaningful for the problem at hand.\n\nMost of deep learning consists of chaining together simple layers. Most layers, such as `tf.keras.layers.Dense`, have parameters that are learned during training.",
"_____no_output_____"
]
],
[
[
"model = keras.Sequential([\n keras.layers.Flatten(input_shape=(28, 28)),\n keras.layers.Dense(128, activation='relu'),\n keras.layers.Dense(10, activation='softmax')\n])",
"_____no_output_____"
]
],
[
[
"The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.\n\nAfter the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are densely connected, or fully connected, neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer is a 10-node *softmax* layer that returns an array of 10 probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the 10 classes.\n\n### Compile the model\n\nBefore the model is ready for training, it needs a few more settings. These are added during the model's *compile* step:\n\n* *Loss function* —This measures how accurate the model is during training. You want to minimize this function to \"steer\" the model in the right direction.\n* *Optimizer* —This is how the model is updated based on the data it sees and its loss function.\n* *Metrics* —Used to monitor the training and testing steps. The following example uses *accuracy*, the fraction of the images that are correctly classified.",
"_____no_output_____"
]
],
[
[
"model.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"## Train the model\n\nTraining the neural network model requires the following steps:\n\n1. Feed the training data to the model. In this example, the training data is in the `train_images` and `train_labels` arrays.\n2. The model learns to associate images and labels.\n3. You ask the model to make predictions about a test set—in this example, the `test_images` array. Verify that the predictions match the labels from the `test_labels` array.\n\nTo start training, call the `model.fit` method—so called because it \"fits\" the model to the training data:",
"_____no_output_____"
]
],
[
[
"model.fit(train_images, train_labels, epochs=10)",
"Train on 60000 samples\nEpoch 1/10\n60000/60000 [==============================] - 6s 96us/sample - loss: 0.4973 - accuracy: 0.8264\nEpoch 2/10\n60000/60000 [==============================] - 5s 76us/sample - loss: 0.3788 - accuracy: 0.8635\nEpoch 3/10\n60000/60000 [==============================] - 5s 81us/sample - loss: 0.3391 - accuracy: 0.8760\nEpoch 4/10\n60000/60000 [==============================] - 5s 82us/sample - loss: 0.3134 - accuracy: 0.8846\nEpoch 5/10\n60000/60000 [==============================] - 5s 81us/sample - loss: 0.2957 - accuracy: 0.8916\nEpoch 6/10\n60000/60000 [==============================] - 5s 80us/sample - loss: 0.2807 - accuracy: 0.8969\nEpoch 7/10\n60000/60000 [==============================] - 5s 78us/sample - loss: 0.2679 - accuracy: 0.9011\nEpoch 8/10\n60000/60000 [==============================] - 5s 75us/sample - loss: 0.2562 - accuracy: 0.9050\nEpoch 9/10\n60000/60000 [==============================] - 5s 78us/sample - loss: 0.2462 - accuracy: 0.9097\nEpoch 10/10\n60000/60000 [==============================] - 5s 79us/sample - loss: 0.2385 - accuracy: 0.9106\n"
]
],
[
[
"As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (or 88%) on the training data.",
"_____no_output_____"
],
[
"## Evaluate accuracy\n\nNext, compare how the model performs on the test dataset:",
"_____no_output_____"
]
],
[
[
"test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)\n\nprint('\\nTest accuracy:', test_acc)",
"10000/1 - 1s - loss: 0.2432 - accuracy: 0.8808\n\nTest accuracy: 0.8808\n"
]
],
[
[
"It turns out that the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy represents *overfitting*. Overfitting is when a machine learning model performs worse on new, previously unseen inputs than on the training data.",
"_____no_output_____"
],
[
"## Make predictions\n\nWith the model trained, you can use it to make predictions about some images.",
"_____no_output_____"
]
],
[
[
"predictions = model.predict(test_images)",
"_____no_output_____"
]
],
[
[
"Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction:",
"_____no_output_____"
]
],
[
[
"predictions[0]",
"_____no_output_____"
]
],
[
[
"A prediction is an array of 10 numbers. They represent the model's \"confidence\" that the image corresponds to each of the 10 different articles of clothing. You can see which label has the highest confidence value:",
"_____no_output_____"
]
],
[
[
"np.argmax(predictions[0])",
"_____no_output_____"
],
[
"predictions[0]",
"_____no_output_____"
]
],
[
[
"So, the model is most confident that this image is an ankle boot, or `class_names[9]`. Examining the test label shows that this classification is correct:",
"_____no_output_____"
]
],
[
[
"test_labels[0]",
"_____no_output_____"
]
],
[
[
"Graph this to look at the full set of 10 class predictions.",
"_____no_output_____"
]
],
[
[
"def plot_image(i, predictions_array, true_label, img):\n predictions_array, true_label, img = predictions_array, true_label[i], img[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n\n plt.imshow(img, cmap=plt.cm.binary)\n\n predicted_label = np.argmax(predictions_array)\n if predicted_label == true_label:\n color = 'blue'\n else:\n color = 'red'\n\n plt.xlabel(\"{} {:2.0f}% ({})\".format(class_names[predicted_label],\n 100*np.max(predictions_array),\n class_names[true_label]),\n color=color)\n\ndef plot_value_array(i, predictions_array, true_label):\n predictions_array, true_label = predictions_array, true_label[i]\n plt.grid(False)\n plt.xticks(range(10))\n plt.yticks([])\n thisplot = plt.bar(range(10), predictions_array, color=\"#777777\")\n plt.ylim([0, 1])\n predicted_label = np.argmax(predictions_array)\n\n thisplot[predicted_label].set_color('red')\n thisplot[true_label].set_color('blue')",
"_____no_output_____"
]
],
[
[
"Let's look at the 0th image, predictions, and prediction array. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percentage (out of 100) for the predicted label.",
"_____no_output_____"
]
],
[
[
"i = 0\nplt.figure(figsize=(6,3))\nplt.subplot(1,2,1)\nplot_image(i, predictions[i], test_labels, test_images)\nplt.subplot(1,2,2)\nplot_value_array(i, predictions[i], test_labels)\nplt.show()",
"_____no_output_____"
],
[
"i = 12\nplt.figure(figsize=(6,3))\nplt.subplot(1,2,1)\nplot_image(i, predictions[i], test_labels, test_images)\nplt.subplot(1,2,2)\nplot_value_array(i, predictions[i], test_labels)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Let's plot several images with their predictions. Note that the model can be wrong even when very confident.",
"_____no_output_____"
]
],
[
[
"# Plot the first X test images, their predicted labels, and the true labels.\n# Color correct predictions in blue and incorrect predictions in red.\nnum_rows = 5\nnum_cols = 3\nnum_images = num_rows*num_cols\nplt.figure(figsize=(2*2*num_cols, 2*num_rows))\nfor i in range(num_images):\n plt.subplot(num_rows, 2*num_cols, 2*i+1)\n plot_image(i, predictions[i], test_labels, test_images)\n plt.subplot(num_rows, 2*num_cols, 2*i+2)\n plot_value_array(i, predictions[i], test_labels)\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"Finally, use the trained model to make a prediction about a single image.",
"_____no_output_____"
]
],
[
[
"# Grab an image from the test dataset.\nimg = test_images[1]\n\nprint(img.shape)",
"(28, 28)\n"
]
],
[
[
"`tf.keras` models are optimized to make predictions on a *batch*, or collection, of examples at once. Accordingly, even though you're using a single image, you need to add it to a list:",
"_____no_output_____"
]
],
[
[
"# Add the image to a batch where it's the only member.\nimg = (np.expand_dims(img,0))\n\nprint(img.shape)",
"(1, 28, 28)\n"
]
],
[
[
"Now predict the correct label for this image:",
"_____no_output_____"
]
],
[
[
"predictions_single = model.predict(img)\n\nprint(predictions_single)",
"[[1.25817623e-05 2.65087269e-13 9.87852097e-01 1.05317500e-11\n 1.08589865e-02 7.78088184e-12 1.27634045e-03 7.59472041e-17\n 6.24028745e-11 9.28564612e-15]]\n"
],
[
"plot_value_array(1, predictions_single[0], test_labels)\n_ = plt.xticks(range(10), class_names, rotation=45)",
"_____no_output_____"
]
],
[
[
"`model.predict` returns a list of lists—one list for each image in the batch of data. Grab the predictions for our (only) image in the batch:",
"_____no_output_____"
]
],
[
[
"np.argmax(predictions_single[0])",
"_____no_output_____"
]
],
[
[
"And the model predicts a label as expected.",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e733d6daa7bb97eb12c7d7f53f19101bfa2ab733 | 9,208 | ipynb | Jupyter Notebook | Lectures/Basics/2. CT Signals.ipynb | lev1khachatryan/ASDS_DSP | 9059d737f6934b81a740c79b33756f7ec9ededb3 | [
"MIT"
] | 1 | 2020-12-29T18:02:13.000Z | 2020-12-29T18:02:13.000Z | Lectures/Basics/2. CT Signals.ipynb | lev1khachatryan/ASDS_DSP | 9059d737f6934b81a740c79b33756f7ec9ededb3 | [
"MIT"
] | null | null | null | Lectures/Basics/2. CT Signals.ipynb | lev1khachatryan/ASDS_DSP | 9059d737f6934b81a740c79b33756f7ec9ededb3 | [
"MIT"
] | null | null | null | 21.615023 | 258 | 0.471221 | [
[
[
"# <div align=\"center\">Basic CT ( Continuous Time ) Signals</div>\n---------------------------------------------------------------------\n\nyou can Find me on Github:\n> ###### [ GitHub](https://github.com/lev1khachatryan)",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"To test a system, generally, standard or basic signals are used. These signals are the basic building blocks for many complex signals. Hence, they play a very important role in the study of signals and systems.",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"# <div align=\"center\">Unit Impulse or Delta Function</div>\n---------------------------------------------------------------------",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"A signal, which satisfies the condition, $δ(t)=lim_{ϵ→∞}x(t)$ is known as unit impulse signal. This signal tends to infinity when t = 0 and tends to zero when t ≠ 0 such that the area under its curve is always equals to one.",
"_____no_output_____"
],
[
"<img src='asset/2/1.png'>",
"_____no_output_____"
],
[
"***Properties of Unit Impulse Signal***",
"_____no_output_____"
],
[
"* δ(t) is an even signal.\n\n\n* δ(t) is an example of neither energy nor power (NENP) signal.\n\n\n* Area of unit impulse signal can be written as:\n \n $\\int_{-∞} ^ {+∞} lim_{ϵ→∞}x(t) \\mathrm{d}t = 1$\n \n \n* Weight or strength of the signal can be written as:\n \n $y(t)=Aδ(t)$",
"_____no_output_____"
],
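[
"For example, applying the area property to the weighted impulse y(t)=Aδ(t) gives\n\n$\\int_{-∞} ^ {+∞} Aδ(t) \\mathrm{d}t = A\\int_{-∞} ^ {+∞} δ(t) \\mathrm{d}t = A$\n\nso the weight (strength) of Aδ(t) is simply A.",
"_____no_output_____"
],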
[
" ",
"_____no_output_____"
],
[
"# <div align=\"center\">Unit Step Signal</div>\n---------------------------------------------------------------------",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"A signal, which satisfies the following two conditions − \n\n* $U(t)=1(whent≥0)$\n\n\n* $U(t)=0(whent<0)$",
"_____no_output_____"
],
[
"It has the property of showing discontinuity at t = 0. At the point of discontinuity, the signal value is given by the average of signal value. This signal has been taken just before and after the point of discontinuity (according to Gibb’s Phenomena).",
"_____no_output_____"
],
[
"<img src='asset/2/2.png'>",
"_____no_output_____"
],
[
"If we add a step signal to another step signal that is time scaled, then the result will be unity. It is a power type signal and the value of power is 0.5. The RMS (Root mean square) value is 0.707 and its average value is also 0.5",
"_____no_output_____"
],
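[
"As a quick check of the values quoted above: since the power of the unit step is 0.5, its RMS value is $\\sqrt{0.5} ≈ 0.707$.",
"_____no_output_____"
],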
[
" ",
"_____no_output_____"
],
[
"# <div align=\"center\">Ramp Signal</div>\n---------------------------------------------------------------------",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"Integration of step signal results in a Ramp signal. It is represented by r(t). Ramp signal also satisfies the condition $r(t)=\\int_{−∞}^{t}U(t)dt=tU(t)$. It is neither energy nor power (NENP) type signal.",
"_____no_output_____"
],
[
"<img src='asset/2/3.png'>",
"_____no_output_____"
],
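[
"As a short worked example of the condition above, for t ≥ 0 the integral of the unit step is\n\n$r(t)=\\int_{−∞}^{t}U(τ)dτ=\\int_{0}^{t}dτ=t$\n\nwhile r(t) = 0 for t < 0, which matches r(t)=tU(t).",
"_____no_output_____"
],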
[
" ",
"_____no_output_____"
],
[
"# <div align=\"center\">Parabolic Signal</div>\n---------------------------------------------------------------------",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"Integration of Ramp signal leads to parabolic signal. It is represented by p(t). Parabolic signal also satisfies he condition $p(t)=\\int_{−∞}^{t}r(t)dt=(t^2/2)U(t)$ . It is neither energy nor Power (NENP) type signal.",
"_____no_output_____"
],
[
"<img src='asset/2/4.png'>",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"# <div align=\"center\">Signum Function</div>\n---------------------------------------------------------------------",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"It is a power type signal. Its power value and RMS (Root mean square) values, both are 1. Average value of signum function is zero.",
"_____no_output_____"
],
[
"<img src='asset/2/5.png'>",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"# <div align=\"center\">Sinusoidal Signal</div>\n---------------------------------------------------------------------",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"A signal, which is continuous in nature is known as continuous signal. General format of a sinusoidal signal is:\n\n$x(t)=Asin(ωt+ϕ)$\n\nHere,\n\nA = amplitude of the signal\n\nω = Angular frequency of the signal (Measured in radians)\n\nφ = Phase angle of the signal (Measured in radians)\n\nThe tendency of this signal is to repeat itself after certain period of time, thus is called periodic signal. The time period of signal is given as $T = 2 \\pi / \\omega$\n\nThe diagrammatic view of sinusoidal signal is shown below.\n\n<img src='asset/2/6.png'>",
"_____no_output_____"
],
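[
"For example, if ω = 100π rad/s, the time period is $T = 2 \\pi / \\omega = 2\\pi/(100\\pi) = 0.02$ s, which corresponds to a frequency of 50 Hz.",
"_____no_output_____"
],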
[
" ",
"_____no_output_____"
],
[
"# <div align=\"center\">Rectangular Function</div>\n---------------------------------------------------------------------",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"A signal is said to be rectangular function type if it satisfies the following condition:\n\n$\\pi(t / \\tau) =\n \\begin{cases}\n 1 & \\quad \\text{if } t \\leq \\tau/2\\\\\n 0 & \\quad \\text{otherwise }\n \\end{cases}\n$\n\n<img src='asset/2/7.png'>\n\nBeing symmetrical about Y-axis, this signal is termed as even signal.",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"# <div align=\"center\">Triangular Pulse Signal</div>\n---------------------------------------------------------------------",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"Any signal, which satisfies the following condition, is known as triangular signal.",
"_____no_output_____"
],
[
"<img src='asset/2/8.png'>",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e733e2c6f990006432c249061f68c79d2bf708f3 | 1,090 | ipynb | Jupyter Notebook | rsmtool/notebooks/intermediate_file_paths.ipynb | srhrshr/rsmtool | 4317f804de82ccb4965c2e7bb185c6ef41458f8e | [
"Apache-2.0"
] | 64 | 2016-04-06T15:57:24.000Z | 2022-03-24T14:17:45.000Z | rsmtool/notebooks/intermediate_file_paths.ipynb | srhrshr/rsmtool | 4317f804de82ccb4965c2e7bb185c6ef41458f8e | [
"Apache-2.0"
] | 479 | 2016-04-07T03:04:09.000Z | 2022-03-10T00:39:22.000Z | rsmtool/notebooks/intermediate_file_paths.ipynb | srhrshr/rsmtool | 4317f804de82ccb4965c2e7bb185c6ef41458f8e | [
"Apache-2.0"
] | 22 | 2016-04-10T06:35:28.000Z | 2022-02-26T05:03:47.000Z | 18.793103 | 114 | 0.555963 | [
[
[
"## Links to intermediate files",
"_____no_output_____"
],
[
"Click on the hyperlinks below to see the intermediate experiment files generated as part of this experiment.",
"_____no_output_____"
]
],
[
[
"from rsmtool.utils.notebook import show_files",
"_____no_output_____"
],
[
"show_files(output_dir, experiment_id, file_format)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
e733e9adcbcf0e88e82416f0520e4c89b5e768f2 | 856 | ipynb | Jupyter Notebook | testing-functions.ipynb | Bachmann1234/data-testing-tutorial | c35b76bbb0f5ca175c88231a73b67969aec34af8 | [
"MIT"
] | 3 | 2019-06-21T11:23:35.000Z | 2021-09-09T14:09:53.000Z | testing-functions.ipynb | hugobowne/data-testing-tutorial | b431b943e5ce28dfac64ca05aa8614b994b11bfc | [
"MIT"
] | null | null | null | testing-functions.ipynb | hugobowne/data-testing-tutorial | b431b943e5ce28dfac64ca05aa8614b994b11bfc | [
"MIT"
] | 2 | 2019-08-15T11:22:41.000Z | 2020-04-01T19:58:37.000Z | 16.461538 | 34 | 0.484813 | [
[
[
"def add(x, y):\n return x + y\n\nassert add(2, 3) == 5\nassert add(2, 3) != 4",
"_____no_output_____"
],
[
"def ",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
e733efd60b55396008c18fa879ebcdff3d3185bd | 58,720 | ipynb | Jupyter Notebook | models/bull_bear.ipynb | cali-in-cau/auto-ta-ml | 0b32b5350120001b3eb31fcb9043b66c088a2a6f | [
"Xnet",
"X11"
] | 5 | 2021-01-22T10:37:49.000Z | 2022-01-15T09:37:19.000Z | models/bull_bear.ipynb | cali-in-cau/auto-ta-ml | 0b32b5350120001b3eb31fcb9043b66c088a2a6f | [
"Xnet",
"X11"
] | null | null | null | models/bull_bear.ipynb | cali-in-cau/auto-ta-ml | 0b32b5350120001b3eb31fcb9043b66c088a2a6f | [
"Xnet",
"X11"
] | 3 | 2021-01-23T11:41:58.000Z | 2021-06-29T11:24:26.000Z | 164.022346 | 38,616 | 0.877163 | [
[
[
"!pip install fastai --upgrade -q\n#!pip install google",
"_____no_output_____"
],
[
"from fastai.vision.all import *",
"_____no_output_____"
],
[
"#!unzip data/bullbear.zip -d data/bullbear",
"_____no_output_____"
],
[
"chart_class = 'BULL','BEAR'\nroot_dir = 'data'\nbase_dir = root_dir + '/bullbear'\npath = Path(base_dir)\nprint(path)",
"data/bullbear\n"
],
[
"charts = DataBlock(blocks=(ImageBlock, CategoryBlock),\n get_items=get_image_files,\n get_y=parent_label,\n splitter=RandomSplitter(valid_pct=0.2, seed=42),\n item_tfms=Resize(400))\n #item_tfms=RandomResizedCrop(224, min_scale=0.5)\n #batch_tfms=aug_transforms()",
"_____no_output_____"
],
[
"#dls = charts.dataloaders(path,batch_size=20)\ndls = charts.dataloaders(path)",
"_____no_output_____"
],
[
"dls.train.show_batch(max_n=8, nrows=2)",
"_____no_output_____"
],
[
"import torch\ntorch.cuda.empty_cache()",
"_____no_output_____"
],
[
"\nlearn = cnn_learner(dls, resnet34, loss_func=CrossEntropyLossFlat(), metrics=accuracy)\nlearn.model = torch.nn.DataParallel(learn.model, device_ids=[0, 1,2,3,4,5,])\n#lr_min, lr_steep = learn.lr_find()\nlearn.fine_tune(10)",
"_____no_output_____"
],
[
"#dls.vocab",
"_____no_output_____"
],
[
"interp = ClassificationInterpretation.from_learner(learn)\ninterp.plot_confusion_matrix()\n#interp.most_confused()\ninterp.print_classification_report()",
"_____no_output_____"
],
[
"pkl_name = \"export_bull_bear.pkl\"\nlearn.model = learn.model.module\nlearn.export(pkl_name)\npath = Path()\npath.ls(file_exts=pkl_name)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e733fcf57b76b01210429f28520839fc0ff6acb1 | 19,255 | ipynb | Jupyter Notebook | .ipynb_checkpoints/ Chapter 4 - Classes and Methods-checkpoint.ipynb | hunterluepke/Learn-Python-for-Stats-and-Econ | d580a8e27ba937fc8401ac6d0714b6488ac8bbb6 | [
"MIT"
] | 16 | 2019-01-10T18:54:13.000Z | 2022-01-28T20:07:20.000Z | .ipynb_checkpoints/ Chapter 4 - Classes and Methods-checkpoint.ipynb | hunterluepke/Learn-Python-for-Stats-and-Econ | d580a8e27ba937fc8401ac6d0714b6488ac8bbb6 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/ Chapter 4 - Classes and Methods-checkpoint.ipynb | hunterluepke/Learn-Python-for-Stats-and-Econ | d580a8e27ba937fc8401ac6d0714b6488ac8bbb6 | [
"MIT"
] | 15 | 2019-01-24T17:11:20.000Z | 2021-12-11T01:53:57.000Z | 39.619342 | 749 | 0.565152 | [
[
[
"# Chapter 4: Classes and Methods\n\nSo far, we have dealt only with functions. Functions are convenient because they generalize some exercise given a certain type of input. In the last chapter we created a function that takes the mean value of a list of elements. It may be useful to create a function that is not owned by a class if you are in a hurry, but it is better to develop a habit of building class objects whenever you think you might want to reuse the functions that we have made. To take advantage of a function while scripting in a different file, we can import the file and instantiate a class object that owns these functions. When a function is owned by a class, we refer to this as a method. In this chapter, you will learn how to create a class with methods.\n\n## Arithmetic Class\n\n| New Concepts | Description |\n| --- | --- |\n| Class | Classes are the fundamental element of object oriented programming. Classes provide a template that defines instances of the class. Objects that are instances of a class share attributes defined by the constructor, in addition to other attributes they may share. |\n| function(. . ., \\*args) | Passing \\*args to a function treats the passed arguments as a tuple and performs a specified operation upon the tuple’s elements. |\n\nIt is useful to build a class with a collection of related objects. We will start by building a class that performs basic arthimetic operations. It will include the functions \"add\", \"multiply\", and \"power\". Before we make any methods, however, we must initialize the class as an object itself.\n\nWe start by building the Arithmetic class and describing its __init__ function. This function will be called automatically upon the creation of an instance of the class. The init function will create an object that can be called at any time. \n\nBe sure to place the class at the top of file, just after you import any libraries that you plan to use. Copy the text below to build your first class.",
"_____no_output_____"
]
],
[
[
"#arithmetic.py\n# you may ignore import jdc, used to split class development\n# other cells that edits a class will include the magic command %% add_to\nimport jdc\n\nclass Arithmetic():\n def __init__(self):\n pass",
"_____no_output_____"
]
],
[
[
"We can create an object that is an instance of the class. At the bottom of the script, add:",
"_____no_output_____"
]
],
[
[
"arithmetic = Arithmetic()\nprint(arithmetic)",
"<__main__.Arithmetic object at 0x0000022F00CAF470>\n"
]
],
[
[
"Following the instance of the Arithmetic class with a ‘.’ enables the calling of objects owned by the class.\n\nNext, let's create the _add()_ method.",
"_____no_output_____"
]
],
[
[
"%%add_to Arithmetic\n#arithmetic.py\n# . . . \ndef add(self, *args): \n try: \n total = 0 \n for arg in args: \n total += arg \n return total \n\n except: \n print(\"Pass int or float to add()\")\n\n# make sure you define arithmetic below the script constructing the class \narithmetic = Arithmetic()",
"_____no_output_____"
]
],
[
[
"To account for inputs that cannot be processed, the method begins with try. This will return an error message in cases where integers or floats may not be passed to the method.\n\nThe _add()_ method passes two arguments: self and \\*args. Self is always implicitly passed to a method, so you will only pass one arguments that will be interpreted as part \\*args. The \\*args command accepts an undefined number of arguments. It is returned within the function as a tuple that includes the values passed to add. Using a for-loop, each of the values can be called individually from the tuple. We create a list from the arguments passed using a generator function, summing the list. \n\nPass values to the add method as noted below",
"_____no_output_____"
]
],
[
[
"#aritmetic.py\n# . . . \nprint(arithmetic.add(1,2,3,4,5,6,7,8,9,10))",
"55\n"
]
],
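[
[
"To see explicitly that the values passed through \\*args arrive as a single tuple, a minimal sketch (separate from arithmetic.py) can be used:",
"_____no_output_____"
]
],
[
[
"# minimal sketch: *args collects the passed values into one tuple\ndef show_args(*args):\n    return args\n\n# expected to return the tuple (1, 2.5, 3)\nshow_args(1, 2.5, 3)",
"_____no_output_____"
]
],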
[
[
"We will add two more functions to our class: the multiply and power functions. As with the addition class, we will create a multiply class that multiplies an unspecified number of arguments. ",
"_____no_output_____"
]
],
[
[
"%%add_to Arithmetic\n#arithmetic.py\n# . . . \ndef multiply(self, *args):\n product = 1\n try:\n for arg in args:\n product *= arg\n return product\n except:\n print(\"Pass only int or float to multiply()\")\n\n# make sure you define arithmetic below the script constructing the class \narithmetic = Arithmetic()",
"_____no_output_____"
],
[
"# . . .\nprint(arithmetic.multiply(2,3,4))",
"24\n"
]
],
[
[
"The last method we will create is the exponent function. This one is straight-forward. Pass a base and an exponent to _.power()_ to yield the result a value, a, where $a=Base^{exponent}.$",
"_____no_output_____"
]
],
[
[
"%%add_to Arithmetic\n#arithmetic.py\n# . . . \ndef power(self, base, exponent):\n try:\n value = base ** exponent\n return value\n except:\n print(\"Pass int or flaot for base and exponent\")\n\n# make sure you define arithmetic below the script constructing the class \narithmetic = Arithmetic()",
"_____no_output_____"
],
[
"# . . .\nprint(arithmetic.power(2,3))",
"8\n"
]
],
[
[
"## Stats Class\nNow that you are comfortable with classes, we can build a Stats() class. This will integrate of the core stats functions that we built in the last chapter. We will be making use of this function when we build a program to run ordinary least squares regression, so make sure that this is well ordered.\n\nSince we have already built the stats functions, I have included the script below and run each function once to check that the class is in working order. Note that everytime a function owned by the Stats() class is called, the program must first call \"self\". This calls the objects itself. We follow self with \n\".function-name\". For example, the mean function must call the total function. It does so with the command \"self.total(listObj)\".\n\nAfter creating stats.py with the Stats class, we will import stats using another python script in the same folder.\n",
"_____no_output_____"
]
],
[
[
"#stats.py\nclass stats():\n def __init__(self):\n print(\"You created an instance of stats()\")\n \n def total(self, list_obj):\n total = 0\n n = len(list_obj)\n for i in range(n):\n total += list_obj[i]\n return total\n \n def mean(self, list_obj):\n n = len(list_obj)\n mean_ = self.total(list_obj) / n\n return mean_ \n \n def median(self, list_obj):\n n = len(list_obj)\n list_obj = sorted(list_obj)\n # lists of even length divided by 2 have a remainder\n if n % 2 != 0:\n # list length is odd\n middle_index = int((n - 1) / 2)\n median_ = list_obj[middle_index]\n else:\n upper_middle_index = int(n / 2)\n lower_middle_index = upper_middle_index - 1\n # pass slice with two middle values to self.mean()\n median_ = self.mean(list_obj[lower_middle_index : upper_middle_index + 1])\n \n return median_\n \n def mode(self, list_obj):\n # use to record value(s) that appear most times\n max_count = 0\n # use to count occurrences of each value in list\n counter_dict = {}\n for value in list_obj:\n # count for each value should start at 0\n counter_dict[value] = 0\n for value in list_obj:\n # add on to the count of the value for each occurrence in list_obj\n counter_dict[value] += 1\n # make a list of the value (not keys) from the dictionary\n count_list = list(counter_dict.values())\n # and find the max value\n max_count = max(count_list)\n # use a generator to make a list of the values (keys) whose number of \n # occurences in the list match max_count\n mode_ = [key for key in counter_dict if counter_dict[key] == max_count]\n\n return mode_\n \n def variance(self, list_obj, sample = False):\n\n # popvar(list) = sum((xi - list_mean)**2) / n for all xi in list\n # save mean value of list\n list_mean = self.mean(list_obj)\n # use n to calculate average of sum squared diffs\n n = len(list_obj)\n # create value we can add squared diffs to\n sum_sq_diff = 0\n for val in list_obj:\n # adds each squared diff to sum_sq_diff\n sum_sq_diff += (val - list_mean) ** 2\n if sample == False:\n # normalize result by dividing by n\n variance_ = sum_sq_diff / n\n else:\n # for samples, normalize by dividing by (n-1)\n variance_ = sum_sq_diff / (n - 1)\n\n return variance_\n \n def SD(self, list_obj, sample = False):\n SD_ = self.variance(list_obj, sample) ** (1/2)\n \n return SD_\n \n def covariance(self, list_obj1, list_obj2, sample = False):\n # determine the mean of each list\n mean1 = self.mean(list_obj1)\n mean2 = self.mean(list_obj2)\n # instantiate a variable holding the value of 0; this will be used to \n # sum the values generated in the for loop below\n cov = 0\n n1 = len(list_obj1)\n n2 = len(list_obj2)\n # check list lengths are equal\n if n1 == n2:\n n = n1\n # sum the product of the differences\n for i in range(n1):\n cov += (list_obj1[i] - mean1) * (list_obj2[i] - mean2)\n if sample == False:\n cov = cov / n\n # account for sample by dividing by one less than number of elements in list\n else:\n cov = cov / (n - 1)\n # return covariance\n return cov\n else:\n print(\"List lengths are not equal\")\n print(\"List1:\", n1)\n print(\"List2:\", n2)\n\n def correlation(self, list_obj1, list_obj2):\n # corr(x,y) = cov(x, y) / (SD(x) * SD(y))\n cov = self.covariance(list_obj1, list_obj2)\n SD1 = self.SD(list_obj1)\n SD2 = self.SD(list_obj2)\n corr = cov / (SD1 * SD2)\n \n return corr\n \n def skewness(self, list_obj, sample = False):\n mean_ = self.mean(list_obj)\n SD_ = self.SD(list_obj, sample)\n skew = 0\n n = len(list_obj)\n for val in list_obj:\n skew += (val - mean_) ** 3\n skew = skew / n if not sample 
else n * skew / ((n - 1)*(n - 1) * SD_ ** 3)\n\n return skew\n \n def kurtosis(self, list_obj, sample = False):\n mean_ = self.mean(list_obj)\n kurt = 0\n SD_ = self.SD(list_obj, sample)\n n = len(list_obj)\n for x in list_obj:\n kurt += (x - mean_) ** 4\n kurt = kurt / (n * SD_ ** 4) if not sample else n * (n + 1) * kurt / \\\n ((n - 1) * (n - 2) * (SD_ ** 4)) - (3 *(n - 1) ** 2) / ((n - 2) * (n - 3))\n\n return kurt",
"_____no_output_____"
]
],
[
[
"We will import stats.py using a separate script called importStats.py. Once this script is imported, call the class *stats()* and name the instance *stats_lib*.",
"_____no_output_____"
]
],
[
[
"import stats\n\nstats_lib = stats.stats()",
"You created an instance of stats()\n"
],
[
"list1 = [3, 6, 9, 12, 15]\nlist2 = [i ** 2 for i in range(3, 8)]\nprint(\"sum list1 and list2\", stats_lib.total(list1 + list2)) \nprint(\"mean list1 and list2\", stats_lib.mean(list1 + list2)) \nprint(\"median list1 and list2\", stats_lib.median(list1 + list2)) \nprint(\"mode of list1 and list2\", stats_lib.mode(list1 + list2)) \nprint(\"variance of list1 and list2\", stats_lib.variance(list1 + list2)) \nprint(\"standard deviation of list1 and list2\", stats_lib.SD(list1 + list2)) \nprint(\"covariance of list1 and list2 (separate)\", \n stats_lib.covariance(list1, list2)) \nprint(\"correlation of list1 and list2 (separate)\", \n stats_lib.correlation(list1, list2)) \nprint(\"skewness of list1 and list2\", stats_lib.skewness(list1 + list2)) \nprint(\"kurtosis of list1 and list2\", stats_lib.kurtosis(list1 + list2)) ",
"sum list1 and list2 180\nmean list1 and list2 18.0\nmedian list1 and list2 13.5\nmode of list1 and list2 [9]\nvariance of list1 and list2 191.4\nstandard deviation of list1 and list2 13.83473888441701\ncovariance of list1 and list2 (separate) 60.0\ncorrelation of list1 and list2 (separate) 0.9930726528736967\nskewness of list1 and list2 3037.7548520445002\nkurtosis of list1 and list2 3.048466504849597\n"
]
],
[
[
"### Exercises\n1. Create a function that calculates the length of a list without using len() and returns this value. Create a list and pass it to the function to find its length.\n2. Create a function that performs dot multiplication on two vectors (lists) – such that if list1 = [x1,x2,x3] and list2 = [y1,y2,y3], dot_product_list1_list2 = [x1y1, x2y2, x3y3] – and returns this list. Pass two lists of the same length to this function.\n3. In a single line, pass two lists of the same length to the function from question 2 and pass the instance of that function to the function from question 1. What is the length of dot_product_list1_list2?\n4. Create two unique lists using generator functions and pass them to the function created in question 2.\n5. Create a function that checks the types of elements in a list. For example, if a list contains a string, an integer, and a float, this function should return a list that contains identifies these three types: [str, int, float].\n6. In a single line, pass a list with at least 4 different types to the function from question 5 and pass the result to the funciton measuring length. \n7. Create a class that houses each of the functions (now methods) that you have created. Create an instance of that class and use each of the methods from the class.\n\n### Exploration\n1. Visit OOP II: Building Classes lesson from Sargent and Stachurski and duplicate \"Example: A Consumer Class\". Following this, pass different values to the class methods and return the value of agent wealth using *object.\\_\\_dict\\_\\_* write a paragraph explaining the script and the results.\n\n2. Visit OOP II: Building Classes lesson from Sargent and Stachurski and duplicate \"Example: The Solow Growth Model\". Following this, pass different values for each of the parameters and show how the output changes. Write a paragraph explaining the script and your findings.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e734207638c86d6f1fab0ac5996286d6717fc6a7 | 33,470 | ipynb | Jupyter Notebook | data_preprocessing - MVI Sci-Kit.ipynb | swaruptripathy/DataScience-Python | 99fbb2162b47ca44c1d89efdf381b4db2e50c73a | [
"MIT"
] | null | null | null | data_preprocessing - MVI Sci-Kit.ipynb | swaruptripathy/DataScience-Python | 99fbb2162b47ca44c1d89efdf381b4db2e50c73a | [
"MIT"
] | null | null | null | data_preprocessing - MVI Sci-Kit.ipynb | swaruptripathy/DataScience-Python | 99fbb2162b47ca44c1d89efdf381b4db2e50c73a | [
"MIT"
] | null | null | null | 47.610242 | 7,800 | 0.64156 | [
[
[
"## Missing value imputation using ML model",
"_____no_output_____"
]
],
[
[
"#Importing packages\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sb",
"_____no_output_____"
],
[
"dataset = pd.read_excel('/Users/swaruptripathy/Desktop/Data Science and AI/datasets/stark_data.xlsx')",
"_____no_output_____"
],
[
"dataset.head()",
"_____no_output_____"
],
[
"dataset.shape",
"_____no_output_____"
],
[
"#Information about the dataset\ndataset.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 6 entries, 0 to 5\nData columns (total 4 columns):\nCharacter 6 non-null object\nAge 5 non-null float64\nGender 6 non-null object\nSurvived 6 non-null int64\ndtypes: float64(1), int64(1), object(2)\nmemory usage: 272.0+ bytes\n"
],
[
"#Check for null values\ndataset.isnull()",
"_____no_output_____"
],
[
"#Plot null values in seaborn\nsb.heatmap(dataset.isnull())",
"_____no_output_____"
],
[
"X = dataset.iloc[:,:-1].values\ny = dataset.iloc[:,3].values\nX[:,1:2]",
"_____no_output_____"
],
[
"#Impute missing value using sklearn imputer from preprocessing\nfrom sklearn.preprocessing import Imputer\nimputer = Imputer(missing_values = 'NaN', strategy = 'mean', axis = 0)\nimputer.fit(X[:, 1:2])\nX[:, 1:2] = imputer.transform(X[:, 1:2]) #imputer.fit_transform()\nX[:, 1:2]",
"/anaconda3/lib/python3.7/site-packages/sklearn/utils/deprecation.py:58: DeprecationWarning: Class Imputer is deprecated; Imputer was deprecated in version 0.20 and will be removed in 0.22. Import impute.SimpleImputer from sklearn instead.\n warnings.warn(msg, category=DeprecationWarning)\n"
],
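[
"# As the DeprecationWarning above suggests, a roughly equivalent sketch with the\n# newer scikit-learn API uses SimpleImputer from sklearn.impute (it imputes\n# column-wise by default, so no axis argument is needed)\nfrom sklearn.impute import SimpleImputer\nsimple_imputer = SimpleImputer(missing_values=np.nan, strategy='mean')\nX[:, 1:2] = simple_imputer.fit_transform(X[:, 1:2])\nX[:, 1:2]",
"_____no_output_____"
],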
[
"X",
"_____no_output_____"
],
[
"help(Imputer)",
"Help on class Imputer in module sklearn.preprocessing.imputation:\n\nclass Imputer(sklearn.base.BaseEstimator, sklearn.base.TransformerMixin)\n | Imputer(*args, **kwargs)\n | \n | Imputation transformer for completing missing values.\n | \n | Read more in the :ref:`User Guide <imputation>`.\n | \n | Parameters\n | ----------\n | missing_values : integer or \"NaN\", optional (default=\"NaN\")\n | The placeholder for the missing values. All occurrences of\n | `missing_values` will be imputed. For missing values encoded as np.nan,\n | use the string value \"NaN\".\n | \n | strategy : string, optional (default=\"mean\")\n | The imputation strategy.\n | \n | - If \"mean\", then replace missing values using the mean along\n | the axis.\n | - If \"median\", then replace missing values using the median along\n | the axis.\n | - If \"most_frequent\", then replace missing using the most frequent\n | value along the axis.\n | \n | axis : integer, optional (default=0)\n | The axis along which to impute.\n | \n | - If `axis=0`, then impute along columns.\n | - If `axis=1`, then impute along rows.\n | \n | verbose : integer, optional (default=0)\n | Controls the verbosity of the imputer.\n | \n | copy : boolean, optional (default=True)\n | If True, a copy of X will be created. If False, imputation will\n | be done in-place whenever possible. Note that, in the following cases,\n | a new copy will always be made, even if `copy=False`:\n | \n | - If X is not an array of floating values;\n | - If X is sparse and `missing_values=0`;\n | - If `axis=0` and X is encoded as a CSR matrix;\n | - If `axis=1` and X is encoded as a CSC matrix.\n | \n | Attributes\n | ----------\n | statistics_ : array of shape (n_features,)\n | The imputation fill value for each feature if axis == 0.\n | \n | Notes\n | -----\n | - When ``axis=0``, columns which only contained missing values at `fit`\n | are discarded upon `transform`.\n | - When ``axis=1``, an exception is raised if there are rows for which it is\n | not possible to fill in the missing values (e.g., because they only\n | contain missing values).\n | \n | Method resolution order:\n | Imputer\n | sklearn.base.BaseEstimator\n | sklearn.base.TransformerMixin\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(*args, **kwargs)\n | DEPRECATED: Imputer was deprecated in version 0.20 and will be removed in 0.22. 
Import impute.SimpleImputer from sklearn instead.\n | \n | fit(self, X, y=None)\n | Fit the imputer on X.\n | \n | Parameters\n | ----------\n | X : {array-like, sparse matrix}, shape (n_samples, n_features)\n | Input data, where ``n_samples`` is the number of samples and\n | ``n_features`` is the number of features.\n | \n | Returns\n | -------\n | self : Imputer\n | \n | transform(self, X)\n | Impute all missing values in X.\n | \n | Parameters\n | ----------\n | X : {array-like, sparse matrix}, shape = [n_samples, n_features]\n | The input data to complete.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from sklearn.base.BaseEstimator:\n | \n | __getstate__(self)\n | \n | __repr__(self)\n | Return repr(self).\n | \n | __setstate__(self, state)\n | \n | get_params(self, deep=True)\n | Get parameters for this estimator.\n | \n | Parameters\n | ----------\n | deep : boolean, optional\n | If True, will return the parameters for this estimator and\n | contained subobjects that are estimators.\n | \n | Returns\n | -------\n | params : mapping of string to any\n | Parameter names mapped to their values.\n | \n | set_params(self, **params)\n | Set the parameters of this estimator.\n | \n | The method works on simple estimators as well as on nested objects\n | (such as pipelines). The latter have parameters of the form\n | ``<component>__<parameter>`` so that it's possible to update each\n | component of a nested object.\n | \n | Returns\n | -------\n | self\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from sklearn.base.BaseEstimator:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | ----------------------------------------------------------------------\n | Methods inherited from sklearn.base.TransformerMixin:\n | \n | fit_transform(self, X, y=None, **fit_params)\n | Fit to data, then transform it.\n | \n | Fits transformer to X and y with optional parameters fit_params\n | and returns a transformed version of X.\n | \n | Parameters\n | ----------\n | X : numpy array of shape [n_samples, n_features]\n | Training set.\n | \n | y : numpy array of shape [n_samples]\n | Target values.\n | \n | Returns\n | -------\n | X_new : numpy array of shape [n_samples, n_features_new]\n | Transformed array.\n\n"
],
[
"X_Age = pd.DataFrame(X[1])\nsb.heatmap(X_Age.isnull())",
"_____no_output_____"
],
[
"X_Age",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7342c23287bad37d1769e432540474294c3d54a | 156,444 | ipynb | Jupyter Notebook | 08 - Create a Pipeline.ipynb | changyuanliu/mslearn-dp100 | 8c64d07fed6aa566c2702c9d253ba55f2ce6c698 | [
"MIT"
] | 1 | 2021-03-18T11:17:25.000Z | 2021-03-18T11:17:25.000Z | 08 - Create a Pipeline.ipynb | changyuanliu/mslearn-dp100 | 8c64d07fed6aa566c2702c9d253ba55f2ce6c698 | [
"MIT"
] | null | null | null | 08 - Create a Pipeline.ipynb | changyuanliu/mslearn-dp100 | 8c64d07fed6aa566c2702c9d253ba55f2ce6c698 | [
"MIT"
] | null | null | null | 70.917498 | 9,348 | 0.649785 | [
[
[
"# Create a Pipeline\n\nYou can perform the various steps required to ingest data, train a model, and register the model individually by using the Azure ML SDK to run script-based experiments. However, in an enterprise environment it is common to encapsulate the sequence of discrete steps required to build a machine learning solution into a *pipeline* that can be run on one or more compute targets, either on-demand by a user, from an automated build process, or on a schedule.\n\nIn this notebook, you'll bring together all of these elements to create a simple pipeline that pre-processes data and then trains and registers a model.",
"_____no_output_____"
],
[
"## Connect to your workspace\n\nTo get started, connect to your workspace.\n\n> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.",
"_____no_output_____"
]
],
[
[
"import azureml.core\nfrom azureml.core import Workspace\n\n# Load the workspace from the saved config file\nws = Workspace.from_config()\nprint('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))",
"Ready to use Azure ML 1.22.0 to work with dp100_ml\n"
]
],
[
[
"## Prepare data\n\nIn your pipeline, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if you created it in previously, the code will find the existing version)",
"_____no_output_____"
]
],
[
[
"from azureml.core import Dataset\n\ndefault_ds = ws.get_default_datastore()\n\nif 'diabetes dataset' not in ws.datasets:\n default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data\n target_path='diabetes-data/', # Put it in a folder path in the datastore\n overwrite=True, # Replace existing files of the same name\n show_progress=True)\n\n #Create a tabular dataset from the path on the datastore (this may take a short while)\n tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))\n\n # Register the tabular dataset\n try:\n tab_data_set = tab_data_set.register(workspace=ws, \n name='diabetes dataset',\n description='diabetes data',\n tags = {'format':'CSV'},\n create_new_version=True)\n print('Dataset registered.')\n except Exception as ex:\n print(ex)\nelse:\n print('Dataset already registered.')",
"Dataset already registered.\n"
]
],
[
[
"## Create scripts for pipeline steps\n\nPipelines consist of one or more *steps*, which can be Python scripts, or specialized steps like a data transfer step that copies data from one location to another. Each step can run in its own compute context. In this exercise, you'll build a simple pipeline that contains two Python script steps: one to pre-process some training data, and another to use the pre-processed data to train and register a model.\n\nFirst, let's create a folder for the script files we'll use in the pipeline steps.",
"_____no_output_____"
]
],
[
[
"import os\n# Create a folder for the pipeline step files\nexperiment_folder = 'diabetes_pipeline'\nos.makedirs(experiment_folder, exist_ok=True)\n\nprint(experiment_folder)",
"diabetes_pipeline\n"
]
],
[
[
"Now let's create the first script, which will read data from the diabetes dataset and apply some simple pre-processing to remove any rows with missing data and normalize the numeric features so they're on a similar scale.\n\nThe script includes a argument named **--prepped-data**, which references the folder where the resulting data should be saved.",
"_____no_output_____"
]
],
[
[
"%%writefile $experiment_folder/prep_diabetes.py\n# Import libraries\nimport os\nimport argparse\nimport pandas as pd\nfrom azureml.core import Run\nfrom sklearn.preprocessing import MinMaxScaler\n\n# Get parameters\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--input-data\", type=str, dest='raw_dataset_id', help='raw dataset')\nparser.add_argument('--prepped-data', type=str, dest='prepped_data', default='prepped_data', help='Folder for results')\nargs = parser.parse_args()\nsave_folder = args.prepped_data\n\n# Get the experiment run context\nrun = Run.get_context()\n\n# load the data (passed as an input dataset)\nprint(\"Loading Data...\")\ndiabetes = run.input_datasets['raw_data'].to_pandas_dataframe()\n\n# Log raw row count\nrow_count = (len(diabetes))\nrun.log('raw_rows', row_count)\n\n# remove nulls\ndiabetes = diabetes.dropna()\n\n# Normalize the numeric columns\nscaler = MinMaxScaler()\nnum_cols = ['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree']\ndiabetes[num_cols] = scaler.fit_transform(diabetes[num_cols])\n\n# Log processed rows\nrow_count = (len(diabetes))\nrun.log('processed_rows', row_count)\n\n# Save the prepped data\nprint(\"Saving Data...\")\nos.makedirs(save_folder, exist_ok=True)\nsave_path = os.path.join(save_folder,'data.csv')\ndiabetes.to_csv(save_path, index=False, header=True)\n\n# End the run\nrun.complete()",
"Writing diabetes_pipeline/prep_diabetes.py\n"
]
],
[
[
"Now you can create the script for the second step, which will train a model. The script includes a argument named **--training-folder**, which references the folder where the prepared data was saved by the previous step.",
"_____no_output_____"
]
],
[
[
"%%writefile $experiment_folder/train_diabetes.py\n# Import libraries\nfrom azureml.core import Run, Model\nimport argparse\nimport pandas as pd\nimport numpy as np\nimport joblib\nimport os\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.metrics import roc_curve\nimport matplotlib.pyplot as plt\n\n# Get parameters\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--training-folder\", type=str, dest='training_folder', help='training data folder')\nargs = parser.parse_args()\ntraining_folder = args.training_folder\n\n# Get the experiment run context\nrun = Run.get_context()\n\n# load the prepared data file in the training folder\nprint(\"Loading Data...\")\nfile_path = os.path.join(training_folder,'data.csv')\ndiabetes = pd.read_csv(file_path)\n\n# Separate features and labels\nX, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values\n\n# Split data into training set and test set\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)\n\n# Train adecision tree model\nprint('Training a decision tree model...')\nmodel = DecisionTreeClassifier().fit(X_train, y_train)\n\n# calculate accuracy\ny_hat = model.predict(X_test)\nacc = np.average(y_hat == y_test)\nprint('Accuracy:', acc)\nrun.log('Accuracy', np.float(acc))\n\n# calculate AUC\ny_scores = model.predict_proba(X_test)\nauc = roc_auc_score(y_test,y_scores[:,1])\nprint('AUC: ' + str(auc))\nrun.log('AUC', np.float(auc))\n\n# plot ROC curve\nfpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])\nfig = plt.figure(figsize=(6, 4))\n# Plot the diagonal 50% line\nplt.plot([0, 1], [0, 1], 'k--')\n# Plot the FPR and TPR achieved by our model\nplt.plot(fpr, tpr)\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC Curve')\nrun.log_image(name = \"ROC\", plot = fig)\nplt.show()\n\n# Save the trained model in the outputs folder\nprint(\"Saving model...\")\nos.makedirs('outputs', exist_ok=True)\nmodel_file = os.path.join('outputs', 'diabetes_model.pkl')\njoblib.dump(value=model, filename=model_file)\n\n# Register the model\nprint('Registering model...')\nModel.register(workspace=run.experiment.workspace,\n model_path = model_file,\n model_name = 'diabetes_model',\n tags={'Training context':'Pipeline'},\n properties={'AUC': np.float(auc), 'Accuracy': np.float(acc)})\n\n\nrun.complete()",
"Writing diabetes_pipeline/train_diabetes.py\n"
]
],
[
[
"## Prepare a compute environment for the pipeline steps\n\nIn this exercise, you'll use the same compute for both steps, but it's important to realize that each step is run independently; so you could specify different compute contexts for each step if appropriate.\n\nFirst, get the compute target you created in a previous lab (if it doesn't exist, it will be created).\n\n> **Important**: Change *your-compute-cluster* to the name of your compute cluster in the code below before running it! Cluster names must be globally unique names between 2 to 16 characters in length. Valid characters are letters, digits, and the - character.",
"_____no_output_____"
]
],
[
[
"from azureml.core.compute import ComputeTarget, AmlCompute\nfrom azureml.core.compute_target import ComputeTargetException\n\ncluster_name = \"dp100cluster\"\n\ntry:\n # Check for existing compute target\n pipeline_cluster = ComputeTarget(workspace=ws, name=cluster_name)\n print('Found existing cluster, use it.')\nexcept ComputeTargetException:\n # If it doesn't already exist, create it\n try:\n compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)\n pipeline_cluster = ComputeTarget.create(ws, cluster_name, compute_config)\n pipeline_cluster.wait_for_completion(show_output=True)\n except Exception as ex:\n print(ex)\n ",
"Found existing cluster, use it.\n"
]
],
[
[
"The compute will require a Python environment with the necessary package dependencies installed, so you'll need to create a run configuration.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Environment\nfrom azureml.core.conda_dependencies import CondaDependencies\nfrom azureml.core.runconfig import RunConfiguration\n\n# Create a Python environment for the experiment\ndiabetes_env = Environment(\"diabetes-pipeline-env\")\ndiabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies\ndiabetes_env.docker.enabled = True # Use a docker container\n\n# Create a set of package dependencies\ndiabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],\n pip_packages=['azureml-defaults','azureml-dataprep[pandas]','pyarrow'])\n\n# Add the dependencies to the environment\ndiabetes_env.python.conda_dependencies = diabetes_packages\n\n# Register the environment \ndiabetes_env.register(workspace=ws)\nregistered_env = Environment.get(ws, 'diabetes-pipeline-env')\n\n# Create a new runconfig object for the pipeline\npipeline_run_config = RunConfiguration()\n\n# Use the compute you created above. \npipeline_run_config.target = pipeline_cluster\n\n# Assign the environment to the run configuration\npipeline_run_config.environment = registered_env\n\nprint (\"Run configuration created.\")",
"Run configuration created.\n"
]
],
[
[
"## Create and run a pipeline\n\nNow you're ready to create and run a pipeline.\n\nFirst you need to define the steps for the pipeline, and any data references that need to passed between them. In this case, the first step must write the prepared data to a folder that can be read from by the second step. Since the steps will be run on remote compute (and in fact, could each be run on different compute), the folder path must be passed as a data reference to a location in a datastore within the workspace. The **PipelineData** object is a special kind of data reference that is used for interim storage locations that can be passed between pipeline steps, so you'll create one and use at as the output for the first step and the input for the second step. Note that you also need to pass it as a script argument so our code can access the datastore location referenced by the data reference.",
"_____no_output_____"
]
],
[
[
"from azureml.pipeline.core import PipelineData\nfrom azureml.pipeline.steps import PythonScriptStep\n\n# Get the training dataset\ndiabetes_ds = ws.datasets.get(\"diabetes dataset\")\n\n# Create a PipelineData (temporary Data Reference) for the model folder\nprepped_data_folder = PipelineData(\"prepped_data_folder\", datastore=ws.get_default_datastore())\n\n# Step 1, Run the data prep script\nprep_step = PythonScriptStep(name = \"Prepare Data\",\n source_directory = experiment_folder,\n script_name = \"prep_diabetes.py\",\n arguments = ['--input-data', diabetes_ds.as_named_input('raw_data'),\n '--prepped-data', prepped_data_folder],\n outputs=[prepped_data_folder],\n compute_target = pipeline_cluster,\n runconfig = pipeline_run_config,\n allow_reuse = True)\n\n# Step 2, run the training script\ntrain_step = PythonScriptStep(name = \"Train and Register Model\",\n source_directory = experiment_folder,\n script_name = \"train_diabetes.py\",\n arguments = ['--training-folder', prepped_data_folder],\n inputs=[prepped_data_folder],\n compute_target = pipeline_cluster,\n runconfig = pipeline_run_config,\n allow_reuse = True)\n\nprint(\"Pipeline steps defined\")",
"Pipeline steps defined\n"
]
],
[
[
"OK, you're ready build the pipeline from the steps you've defined and run it as an experiment.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment\nfrom azureml.pipeline.core import Pipeline\nfrom azureml.widgets import RunDetails\n\n# Construct the pipeline\npipeline_steps = [prep_step, train_step]\npipeline = Pipeline(workspace=ws, steps=pipeline_steps)\nprint(\"Pipeline is built.\")\n\n# Create an experiment and run the pipeline\nexperiment = Experiment(workspace=ws, name = 'mslearn-diabetes-pipeline')\npipeline_run = experiment.submit(pipeline, regenerate_outputs=True)\nprint(\"Pipeline submitted for execution.\")\nRunDetails(pipeline_run).show()\npipeline_run.wait_for_completion(show_output=True)",
"Pipeline is built.\nCreated step Prepare Data [275367ac][730eb0c0-98ca-4c8e-8e2f-b78374815d85], (This step will run and generate new outputs)\nCreated step Train and Register Model [2e06e0fa][b0675b0e-f8c3-4d6a-95af-8f069378f3b8], (This step will run and generate new outputs)\nSubmitted PipelineRun 5d9d21f5-7b67-4982-9e11-32111140fb5a\nLink to Azure Machine Learning Portal: https://ml.azure.com/experiments/mslearn-diabetes-pipeline/runs/5d9d21f5-7b67-4982-9e11-32111140fb5a?wsid=/subscriptions/8e2eae19-fb68-43d0-a429-b4d1a6bcf2d1/resourcegroups/dp100/workspaces/dp100_ml\nPipeline submitted for execution.\n"
]
],
[
[
"A graphical representation of the pipeline experiment will be displayed in the widget as it runs. Keep an eye on the kernel indicator at the top right of the page, when it turns from **⚫** to **◯**, the code has finished running. You can also monitor pipeline runs in the **Experiments** page in [Azure Machine Learning studio](https://ml.azure.com).\n\nWhen the pipeline has finished, you can examine the metrics recorded by it's child runs.",
"_____no_output_____"
]
],
[
[
"for run in pipeline_run.get_children():\n print(run.name, ':')\n metrics = run.get_metrics()\n for metric_name in metrics:\n print('\\t',metric_name, \":\", metrics[metric_name])",
"Train and Register Model :\n\t Accuracy : 0.9\n\t AUC : 0.8863896775883228\n\t ROC : aml://artifactId/ExperimentRun/dcid.5aa58156-7e90-44bd-9338-e5dc358380f4/ROC_1615925288.png\nPrepare Data :\n\t raw_rows : 15000\n\t processed_rows : 15000\n"
]
],
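[
[
"The ROC entry above is logged as an image artifact rather than a plain metric value. If you want to pull logged files from a child run into your notebook environment, you can list and download them. This is a hedged sketch: the `.png` filter and the local file names are assumptions, so check the output of `get_file_names()` for the exact artifact names.\n\n```python\n# Sketch: download image artifacts logged by the training step (file names may differ)\nfor run in pipeline_run.get_children():\n    if run.name == 'Train and Register Model':\n        for file_name in run.get_file_names():\n            if file_name.endswith('.png'):\n                run.download_file(name=file_name, output_file_path=file_name.split('/')[-1])\n```",
"_____no_output_____"
]
],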
[
[
"Assuming the pipeline was successful, a new model should be registered with a *Training context* tag indicating it was trained in a pipeline. Run the following code to verify this.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Model\n\nfor model in Model.list(ws):\n print(model.name, 'version:', model.version)\n for tag_name in model.tags:\n tag = model.tags[tag_name]\n print ('\\t',tag_name, ':', tag)\n for prop_name in model.properties:\n prop = model.properties[prop_name]\n print ('\\t',prop_name, ':', prop)\n print('\\n')",
"diabetes_model version: 7\n\t Training context : Pipeline\n\t AUC : 0.8863896775883228\n\t Accuracy : 0.9\n\n\ndiabetes_model version: 6\n\t Training context : Compute cluster\n\t AUC : 0.8852500572906943\n\t Accuracy : 0.9\n\n\ndiabetes_model version: 5\n\t Training context : Compute cluster\n\t AUC : 0.8852500572906943\n\t Accuracy : 0.9\n\n\ndiabetes_model version: 4\n\t Training context : File dataset\n\t AUC : 0.8568743524381947\n\t Accuracy : 0.7891111111111111\n\n\ndiabetes_model version: 3\n\t Training context : Tabular dataset\n\t AUC : 0.8568509052814499\n\t Accuracy : 0.7891111111111111\n\n\ndiabetes_model version: 2\n\t Training context : Parameterized script\n\t AUC : 0.8484357430717946\n\t Accuracy : 0.774\n\n\ndiabetes_model version: 1\n\t Training context : Script\n\t AUC : 0.8483203144435048\n\t Accuracy : 0.774\n\n\namlstudio-designer-predict-dia version: 2\n\t CreatedByAMLStudio : true\n\n\namlstudio-designer-predict-dia version: 1\n\t CreatedByAMLStudio : true\n\n\nAutoMLafb0d63c21 version: 1\n\n\n"
]
],
[
[
"## Publish the pipeline\n\nAfter you've created and tested a pipeline, you can publish it as a REST service.",
"_____no_output_____"
]
],
[
[
"# Publish the pipeline from the run\npublished_pipeline = pipeline_run.publish_pipeline(\n name=\"diabetes-training-pipeline\", description=\"Trains diabetes model\", version=\"1.0\")\n\npublished_pipeline",
"_____no_output_____"
]
],
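[
[
"If you need the published pipeline again later (for example from a different script or session), you can look it up by name rather than relying on the `published_pipeline` variable. A minimal sketch using the name chosen above:\n\n```python\n# Sketch: retrieve the published pipeline again by name\nfrom azureml.pipeline.core import PublishedPipeline\n\nmatches = [p for p in PublishedPipeline.list(ws) if p.name == 'diabetes-training-pipeline']\nif matches:\n    published_pipeline = matches[0]\n    print(published_pipeline.name, published_pipeline.id)\n```",
"_____no_output_____"
]
],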
[
[
"Note that the published pipeline has an endpoint, which you can see in the **Endpoints** page (on the **Pipeline Endpoints** tab) in [Azure Machine Learning studio](https://ml.azure.com). You can also find its URI as a property of the published pipeline object:",
"_____no_output_____"
]
],
[
[
"rest_endpoint = published_pipeline.endpoint\nprint(rest_endpoint)",
"https://northcentralus.api.azureml.ms/pipelines/v1.0/subscriptions/8e2eae19-fb68-43d0-a429-b4d1a6bcf2d1/resourceGroups/dp100/providers/Microsoft.MachineLearningServices/workspaces/dp100_ml/PipelineRuns/PipelineSubmit/4fbd8be4-a138-4eb4-8642-f29ba99b5dda\n"
]
],
[
[
"## Call the pipeline endpoint\n\nTo use the endpoint, client applications need to make a REST call over HTTP. This request must be authenticated, so an authorization header is required. A real application would require a service principal with which to be authenticated, but to test this out, we'll use the authorization header from your current connection to your Azure workspace, which you can get using the following code:",
"_____no_output_____"
]
],
[
[
"from azureml.core.authentication import InteractiveLoginAuthentication\n\ninteractive_auth = InteractiveLoginAuthentication()\nauth_header = interactive_auth.get_authentication_header()\nprint(\"Authentication header ready.\")",
"Authentication header ready.\n"
]
],
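[
[
"For a non-interactive client (the service principal approach mentioned above), the header can be produced in much the same way. This is a sketch with placeholder values that you would supply from your own Azure AD app registration, ideally via environment variables or a key vault rather than hard-coded strings:\n\n```python\n# Sketch: service principal authentication for a real client application (placeholder values)\nfrom azureml.core.authentication import ServicePrincipalAuthentication\n\nsp_auth = ServicePrincipalAuthentication(tenant_id='<your-tenant-id>',\n                                         service_principal_id='<your-client-id>',\n                                         service_principal_password='<your-client-secret>')\nauth_header = sp_auth.get_authentication_header()\n```",
"_____no_output_____"
]
],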
[
[
"Now we're ready to call the REST interface. The pipeline runs asynchronously, so we'll get an identifier back, which we can use to track the pipeline experiment as it runs:",
"_____no_output_____"
]
],
[
[
"import requests\n\nexperiment_name = 'mslearn-diabetes-pipeline'\n\nrest_endpoint = published_pipeline.endpoint\nresponse = requests.post(rest_endpoint, \n headers=auth_header, \n json={\"ExperimentName\": experiment_name})\nrun_id = response.json()[\"Id\"]\nrun_id",
"_____no_output_____"
]
],
[
[
"Since you have the run ID, you can use it to wait for the run to complete.\n\n> **Note**: The pipeline should complete quickly, because each step was configured to allow output reuse. This was done primarily for convenience and to save time in this course. In reality, you'd likely want the first step to run every time in case the data has changed, and trigger the subsequent steps only if the output from step one changes.",
"_____no_output_____"
]
],
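[
[
"As the note above suggests, reuse was enabled purely for convenience. If you want the data preparation step to run on every submission instead, the only change needed is the `allow_reuse` flag when the step is defined, for example:\n\n```python\n# Sketch: redefine the prep step with reuse disabled (other arguments exactly as before)\nprep_step = PythonScriptStep(name = 'Prepare Data',\n                             source_directory = experiment_folder,\n                             script_name = 'prep_diabetes.py',\n                             arguments = ['--input-data', diabetes_ds.as_named_input('raw_data'),\n                                          '--prepped-data', prepped_data_folder],\n                             outputs=[prepped_data_folder],\n                             compute_target = pipeline_cluster,\n                             runconfig = pipeline_run_config,\n                             allow_reuse = False)  # always re-run this step\n```",
"_____no_output_____"
]
],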
[
[
"from azureml.pipeline.core.run import PipelineRun\n\npublished_pipeline_run = PipelineRun(ws.experiments[experiment_name], run_id)\npublished_pipeline_run.wait_for_completion(show_output=True)",
"PipelineRunId: 6a5c86f3-5882-44c2-b2b8-ed8770beb271\nLink to Azure Machine Learning Portal: https://ml.azure.com/experiments/mslearn-diabetes-pipeline/runs/6a5c86f3-5882-44c2-b2b8-ed8770beb271?wsid=/subscriptions/8e2eae19-fb68-43d0-a429-b4d1a6bcf2d1/resourcegroups/dp100/workspaces/dp100_ml\nPipelineRun Status: Running\n\n\nStepRunId: 17434de9-8a2b-4fa3-a330-ab762b774434\nLink to Azure Machine Learning Portal: https://ml.azure.com/experiments/mslearn-diabetes-pipeline/runs/17434de9-8a2b-4fa3-a330-ab762b774434?wsid=/subscriptions/8e2eae19-fb68-43d0-a429-b4d1a6bcf2d1/resourcegroups/dp100/workspaces/dp100_ml\n\nStepRun(Prepare Data) Execution Summary\n========================================\nStepRun( Prepare Data ) Status: Finished\n{'runId': '17434de9-8a2b-4fa3-a330-ab762b774434', 'target': 'dp100cluster', 'status': 'Completed', 'startTimeUtc': '2021-03-16T20:08:51.951148Z', 'endTimeUtc': '2021-03-16T20:08:52.229322Z', 'properties': {'azureml.reusedrunid': '62c252a6-14a8-4376-9aec-afd9adc654eb', 'azureml.reusednodeid': '275367ac', 'azureml.reusedpipeline': '5d9d21f5-7b67-4982-9e11-32111140fb5a', 'azureml.reusedpipelinerunid': '5d9d21f5-7b67-4982-9e11-32111140fb5a', 'azureml.runsource': 'azureml.StepRun', 'azureml.nodeid': '275367ac', 'ContentSnapshotId': 'ee0fd92b-7a0b-4d1c-af14-4125dffd3c13', 'StepType': 'PythonScriptStep', 'ComputeTargetType': 'AmlCompute', 'azureml.moduleid': '730eb0c0-98ca-4c8e-8e2f-b78374815d85', 'azureml.pipelinerunid': '6a5c86f3-5882-44c2-b2b8-ed8770beb271', 'azureml.pipelineid': '4fbd8be4-a138-4eb4-8642-f29ba99b5dda', '_azureml.ComputeTargetType': 'amlcompute', 'ProcessInfoFile': 'azureml-logs/process_info.json', 'ProcessStatusFile': 'azureml-logs/process_status.json'}, 'inputDatasets': [], 'outputDatasets': [], 'runDefinition': {'script': 'prep_diabetes.py', 'command': '', 'useAbsolutePath': False, 'arguments': ['--input-data', 'DatasetConsumptionConfig:raw_data', '--prepped-data', '$AZUREML_DATAREFERENCE_prepped_data_folder'], 'sourceDirectoryDataStore': None, 'framework': 'Python', 'communicator': 'None', 'target': 'dp100cluster', 'dataReferences': {'prepped_data_folder': {'dataStoreName': 'workspaceblobstore', 'mode': 'Mount', 'pathOnDataStore': 'azureml/62c252a6-14a8-4376-9aec-afd9adc654eb/prepped_data_folder', 'pathOnCompute': None, 'overwrite': False}}, 'data': {'raw_data': {'dataLocation': {'dataset': {'id': 'fe365338-d31f-4dbd-9afa-7d82ea9c07c3', 'name': None, 'version': '2'}, 'dataPath': None}, 'mechanism': 'Direct', 'environmentVariableName': 'raw_data', 'pathOnCompute': None, 'overwrite': False}}, 'outputData': {}, 'jobName': None, 'maxRunDurationSeconds': None, 'nodeCount': 1, 'priority': None, 'credentialPassthrough': False, 'identity': None, 'environment': {'name': 'diabetes-pipeline-env', 'version': '1', 'python': {'interpreterPath': 'python', 'userManagedDependencies': False, 'condaDependencies': {'channels': ['anaconda', 'conda-forge'], 'dependencies': ['python=3.6.2', {'pip': ['azureml-defaults~=1.22.0', 'azureml-dataprep[pandas]', 'pyarrow']}, 'scikit-learn', 'ipykernel', 'matplotlib', 'pandas', 'pip'], 'name': 'azureml_47db242d52dc78151f1bb8d6d04d435d'}, 'baseCondaEnvironment': None}, 'environmentVariables': {'EXAMPLE_ENV_VAR': 'EXAMPLE_VALUE'}, 'docker': {'baseImage': 'mcr.microsoft.com/azureml/intelmpi2018.3-ubuntu16.04:20210104.v1', 'platform': {'os': 'Linux', 'architecture': 'amd64'}, 'baseDockerfile': None, 'baseImageRegistry': {'address': None, 'username': None, 'password': None}, 'enabled': True, 'arguments': []}, 'spark': 
{'repositories': [], 'packages': [], 'precachePackages': True}, 'inferencingStackVersion': None}, 'history': {'outputCollection': True, 'directoriesToWatch': ['logs'], 'enableMLflowTracking': True, 'snapshotProject': True}, 'spark': {'configuration': {'spark.app.name': 'Azure ML Experiment', 'spark.yarn.maxAppAttempts': '1'}}, 'parallelTask': {'maxRetriesPerWorker': 0, 'workerCountPerNode': 1, 'terminalExitCodes': None, 'configuration': {}}, 'amlCompute': {'name': None, 'vmSize': None, 'retainCluster': False, 'clusterMaxNodeCount': 1}, 'aiSuperComputer': {'instanceType': None, 'imageVersion': None, 'location': None, 'aiSuperComputerStorageData': None, 'interactive': False, 'scalePolicy': None, 'virtualClusterArmId': None}, 'tensorflow': {'workerCount': 1, 'parameterServerCount': 1}, 'mpi': {'processCountPerNode': 1}, 'pyTorch': {'communicationBackend': 'nccl', 'processCount': None}, 'hdi': {'yarnDeployMode': 'Cluster'}, 'containerInstance': {'region': None, 'cpuCores': 2.0, 'memoryGb': 3.5}, 'exposedPorts': None, 'docker': {'useDocker': True, 'sharedVolumes': True, 'shmSize': '2g', 'arguments': []}, 'cmk8sCompute': {'configuration': {}}, 'commandReturnCodeConfig': {'returnCode': 'Zero', 'successfulReturnCodes': []}, 'environmentVariables': {}}, 'logFiles': {'azureml-logs/20_image_build_log.txt': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/azureml-logs/20_image_build_log.txt?sv=2019-02-02&sr=b&sig=UEcjgyoGkZKaeZC9Py52o1wkXebQslc4TSZmYdTZQyA%3D&st=2021-03-16T19%3A57%3A28Z&se=2021-03-17T04%3A07%3A28Z&sp=r', 'azureml-logs/55_azureml-execution-tvmps_4abfcb408860262effbb7b1d71d113e9cdf7d2f7e24bc03e87bbc6b1290f902e_d.txt': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/azureml-logs/55_azureml-execution-tvmps_4abfcb408860262effbb7b1d71d113e9cdf7d2f7e24bc03e87bbc6b1290f902e_d.txt?sv=2019-02-02&sr=b&sig=n91kXCdQxA2YWD671kZgnPpggKOcEeh2RFMbUyU6%2Bhg%3D&st=2021-03-16T19%3A57%3A29Z&se=2021-03-17T04%3A07%3A29Z&sp=r', 'azureml-logs/65_job_prep-tvmps_4abfcb408860262effbb7b1d71d113e9cdf7d2f7e24bc03e87bbc6b1290f902e_d.txt': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/azureml-logs/65_job_prep-tvmps_4abfcb408860262effbb7b1d71d113e9cdf7d2f7e24bc03e87bbc6b1290f902e_d.txt?sv=2019-02-02&sr=b&sig=EP9dfjzct7osiT3k1BtvInebDdckTbRb3ogKmv3XMps%3D&st=2021-03-16T19%3A57%3A29Z&se=2021-03-17T04%3A07%3A29Z&sp=r', 'azureml-logs/70_driver_log.txt': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/azureml-logs/70_driver_log.txt?sv=2019-02-02&sr=b&sig=chxWW0%2F%2BK78hYFascpRaNapnXzDrEr%2F44mqxE9qduZk%3D&st=2021-03-16T19%3A57%3A29Z&se=2021-03-17T04%3A07%3A29Z&sp=r', 'azureml-logs/75_job_post-tvmps_4abfcb408860262effbb7b1d71d113e9cdf7d2f7e24bc03e87bbc6b1290f902e_d.txt': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/azureml-logs/75_job_post-tvmps_4abfcb408860262effbb7b1d71d113e9cdf7d2f7e24bc03e87bbc6b1290f902e_d.txt?sv=2019-02-02&sr=b&sig=ibfK6Y5wnNc3hDHt%2FK8neYZBYyorIpI5%2BEN5KYDykmU%3D&st=2021-03-16T19%3A57%3A29Z&se=2021-03-17T04%3A07%3A29Z&sp=r', 'azureml-logs/process_info.json': 
'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/azureml-logs/process_info.json?sv=2019-02-02&sr=b&sig=jSx4PDMOL6OEkRSuuLYWDMjSHQ7QRWtgOAVunT%2B8cwo%3D&st=2021-03-16T19%3A57%3A29Z&se=2021-03-17T04%3A07%3A29Z&sp=r', 'azureml-logs/process_status.json': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/azureml-logs/process_status.json?sv=2019-02-02&sr=b&sig=n0UbAiuLyK0KR4D0h9uJandcLYnRzb1lcLKY1RSY%2FwA%3D&st=2021-03-16T19%3A57%3A29Z&se=2021-03-17T04%3A07%3A29Z&sp=r', 'logs/azureml/119_azureml.log': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/logs/azureml/119_azureml.log?sv=2019-02-02&sr=b&sig=BMAcbrU4dhIfQ6YpUvUHvGChyR13QR%2F3nMA29%2BF8MqA%3D&st=2021-03-16T19%3A57%3A28Z&se=2021-03-17T04%3A07%3A28Z&sp=r', 'logs/azureml/dataprep/backgroundProcess.log': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/logs/azureml/dataprep/backgroundProcess.log?sv=2019-02-02&sr=b&sig=REpiD69pp%2FUYA9ENzm5DTgaPpgb5ApuST32MCYTc6qo%3D&st=2021-03-16T19%3A57%3A28Z&se=2021-03-17T04%3A07%3A28Z&sp=r', 'logs/azureml/dataprep/backgroundProcess_Telemetry.log': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/logs/azureml/dataprep/backgroundProcess_Telemetry.log?sv=2019-02-02&sr=b&sig=Deu9DrOVdIUzMBlWhi9m8zPwDkvtCheDQpYMjAHdY0s%3D&st=2021-03-16T19%3A57%3A28Z&se=2021-03-17T04%3A07%3A28Z&sp=r', 'logs/azureml/executionlogs.txt': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/logs/azureml/executionlogs.txt?sv=2019-02-02&sr=b&sig=j9B2nke2m2z0fm%2BrC43AQKen%2B7R1Z24jaAF2WITLzy4%3D&st=2021-03-16T19%3A57%3A28Z&se=2021-03-17T04%3A07%3A28Z&sp=r', 'logs/azureml/job_prep_azureml.log': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/logs/azureml/job_prep_azureml.log?sv=2019-02-02&sr=b&sig=93cJa%2FQ18ZKEMroUAvqG96BVeHOWkanbIkEYZ3SwgGw%3D&st=2021-03-16T19%3A57%3A28Z&se=2021-03-17T04%3A07%3A28Z&sp=r', 'logs/azureml/job_release_azureml.log': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/logs/azureml/job_release_azureml.log?sv=2019-02-02&sr=b&sig=8vA9sFnHwQDPwPXjS%2BiyN3f6FCRDhvTOhXcTyXCfBxQ%3D&st=2021-03-16T19%3A57%3A28Z&se=2021-03-17T04%3A07%3A28Z&sp=r', 'logs/azureml/stderrlogs.txt': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/logs/azureml/stderrlogs.txt?sv=2019-02-02&sr=b&sig=8N4eE5oJwYS6ETsbojjzM0URtB8939wEEzvWV2BpUlM%3D&st=2021-03-16T19%3A57%3A28Z&se=2021-03-17T04%3A07%3A28Z&sp=r', 'logs/azureml/stdoutlogs.txt': 'https://dp100ml3144169357.blob.core.windows.net/azureml/ExperimentRun/dcid.62c252a6-14a8-4376-9aec-afd9adc654eb/logs/azureml/stdoutlogs.txt?sv=2019-02-02&sr=b&sig=WlXtk1EYQctfTpDpJFvnpG1%2FR8vAL6mTHwALZiERqvE%3D&st=2021-03-16T19%3A57%3A28Z&se=2021-03-17T04%3A07%3A28Z&sp=r'}, 'submittedBy': 'ChangYuan Liu'}\n\n"
]
],
[
[
"## Schedule the Pipeline\n\nSuppose the clinic for the diabetes patients collects new data each week, and adds it to the dataset. You could run the pipeline every week to retrain the model with the new data.",
"_____no_output_____"
]
],
[
[
"from azureml.pipeline.core import ScheduleRecurrence, Schedule\n\n# Submit the Pipeline every Monday at 00:00 UTC\nrecurrence = ScheduleRecurrence(frequency=\"Week\", interval=1, week_days=[\"Monday\"], time_of_day=\"00:00\")\nweekly_schedule = Schedule.create(ws, name=\"weekly-diabetes-training\", \n description=\"Based on time\",\n pipeline_id=published_pipeline.id, \n experiment_name='mslearn-diabetes-pipeline', \n recurrence=recurrence)\nprint('Pipeline scheduled.')",
"Pipeline scheduled.\n"
]
],
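[
[
"Besides a time-based recurrence, `Schedule.create` can also react to data changes by polling a datastore path. The following is a hedged sketch: the path is a placeholder and the polling interval is in minutes.\n\n```python\n# Sketch: run the pipeline when new files appear in the default datastore (placeholder path)\ndatastore = ws.get_default_datastore()\nreactive_schedule = Schedule.create(ws, name='diabetes-data-trigger',\n                                    description='Based on data changes',\n                                    pipeline_id=published_pipeline.id,\n                                    experiment_name='mslearn-diabetes-pipeline',\n                                    datastore=datastore,\n                                    path_on_datastore='diabetes-data',\n                                    polling_interval=5)\n```",
"_____no_output_____"
]
],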
[
[
"You can retrieve the schedules that are defined in the workspace like this:",
"_____no_output_____"
]
],
[
[
"schedules = Schedule.list(ws)\nschedules",
"_____no_output_____"
]
],
[
[
"You can check the latest run like this:",
"_____no_output_____"
]
],
[
[
"pipeline_experiment = ws.experiments.get('mslearn-diabetes-pipeline')\nlatest_run = list(pipeline_experiment.get_runs())[0]\n\nlatest_run.get_details()",
"_____no_output_____"
]
],
[
[
"This is a simple example, designed to demonstrate the principle. In reality, you could build more sophisticated logic into the pipeline steps - for example, evaluating the model against some test data to calculate a performance metric like AUC or accuracy, comparing the metric to that of any previously registered versions of the model, and only registering the new model if it performs better.\n\nYou can use the [Azure Machine Learning extension for Azure DevOps](https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.vss-services-azureml) to combine Azure ML pipelines with Azure DevOps pipelines (yes, it *is* confusing that they have the same name!) and integrate model retraining into a *continuous integration/continuous deployment (CI/CD)* process. For example you could use an Azure DevOps *build* pipeline to trigger an Azure ML pipeline that trains and registers a model, and when the model is registered it could trigger an Azure Devops *release* pipeline that deploys the model as a web service, along with the application or service that consumes the model.",
"_____no_output_____"
]
]
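,
[
[
"The metric-gated registration described above could look roughly like the sketch below, placed inside the training script (or a separate evaluation step). It assumes the run logs an 'AUC' metric as in this lab; the model path and the tag/property names are assumptions rather than the exact ones used here.\n\n```python\n# Sketch: register the new model only if it beats the latest registered version (assumed names and paths)\nfrom azureml.core import Model, Run\n\nrun = Run.get_context()\nws = run.experiment.workspace\nnew_auc = run.get_metrics().get('AUC', 0)\n\ntry:\n    current = Model(ws, 'diabetes_model')  # latest registered version\n    best_auc = float(current.tags.get('AUC') or current.properties.get('AUC') or 0)\nexcept Exception:\n    best_auc = 0  # no model registered yet\n\nif new_auc > best_auc:\n    run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',\n                       tags={'Training context': 'Pipeline', 'AUC': str(new_auc)})\nelse:\n    print('New model did not improve on AUC', best_auc, '- skipping registration.')\n```",
"_____no_output_____"
]
]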
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e734467a278df08374b7140d28e97ac94d13c93e | 6,879 | ipynb | Jupyter Notebook | docs/examples/Example_Aggregations.ipynb | pythonpanda2/forge | 8ed5925cbde8b95188409f9404d0fad4bf07791c | [
"Apache-2.0"
] | 30 | 2017-08-23T02:54:11.000Z | 2022-03-13T22:26:30.000Z | docs/examples/Example_Aggregations.ipynb | pythonpanda2/forge | 8ed5925cbde8b95188409f9404d0fad4bf07791c | [
"Apache-2.0"
] | 26 | 2017-08-09T01:32:01.000Z | 2021-10-11T04:38:31.000Z | docs/examples/Example_Aggregations.ipynb | pythonpanda2/forge | 8ed5925cbde8b95188409f9404d0fad4bf07791c | [
"Apache-2.0"
] | 10 | 2017-09-14T18:16:39.000Z | 2020-11-24T18:09:49.000Z | 24.923913 | 192 | 0.409071 | [
[
[
"# Example Aggregations",
"_____no_output_____"
],
[
"## Aggregating data with MDF",
"_____no_output_____"
],
[
"Searches using `Forge.search()` are limited to 10,000 results. However, there are two methods to circumvent this restriction: `Forge.aggregate_source()` and `Forge.aggregate()`.",
"_____no_output_____"
]
],
[
[
"import json\nfrom mdf_forge.forge import Forge",
"_____no_output_____"
],
[
"mdf = Forge()",
"_____no_output_____"
]
],
[
[
"### aggregate_source - NIST XPS DB\nExample: We want to collect all records from the NIST XPS Database and analyze the binding energies. This database has almost 30,000 records, so we have to use `aggregate()`.",
"_____no_output_____"
]
],
[
[
"# First, let's aggregate all the nist_xps_db data.\nall_entries = mdf.aggregate_sources(\"nist_xps_db\")\nprint(len(all_entries))",
"29190\n"
],
[
"# Now, let's parse out the enery_uncertainty_ev and print the results for analysis.\nuncertainties = {}\nfor record in all_entries:\n if record[\"mdf\"][\"resource_type\"] == \"record\":\n unc = record.get(\"nist_xps_db_v1\", {}).get(\"energy_uncertainty_ev\", 0)\n if not uncertainties.get(unc):\n uncertainties[unc] = 1\n else:\n uncertainties[unc] += 1\nprint(json.dumps(uncertainties, sort_keys=True, indent=4, separators=(',', ': ')))",
"{\n \"0\": 29189\n}\n"
]
],
[
[
"### aggregate - Multiple Datasets\nExample: We want to analyze how often elements are studied with Gallium (Ga), and what the most frequent elemental pairing is. There are more than 10,000 records containing Gallium data.",
"_____no_output_____"
]
],
[
[
"# First, let's aggregate everything that has \"Ga\" in the list of elements.\nall_results = mdf.aggregate(\"material.elements:Ga\")\nprint(len(all_results))",
"18232\n"
],
[
"# Now, let's parse out the other elements in each record and keep a running tally to print out.\nelements = {}\nfor record in all_results:\n if record[\"mdf\"][\"resource_type\"] == \"record\":\n elems = record[\"material\"][\"elements\"]\n for elem in elems:\n if elem in elements.keys():\n elements[elem] += 1\n else:\n elements[elem] = 1\nprint(json.dumps(elements, sort_keys=True, indent=4, separators=(',', ': ')))",
"{\n \"Ac\": 267,\n \"Ag\": 323,\n \"Al\": 322,\n \"Ar\": 2,\n \"As\": 872,\n \"Au\": 372,\n \"B\": 301,\n \"Ba\": 342,\n \"Be\": 281,\n \"Bi\": 4172,\n \"Br\": 38,\n \"C\": 87,\n \"Ca\": 370,\n \"Cd\": 174,\n \"Ce\": 325,\n \"Cl\": 57,\n \"Co\": 381,\n \"Cr\": 315,\n \"Cs\": 160,\n \"Cu\": 403,\n \"Dy\": 317,\n \"Er\": 321,\n \"Eu\": 304,\n \"F\": 84,\n \"Fe\": 2989,\n \"Ga\": 18232,\n \"Gd\": 156,\n \"Ge\": 333,\n \"H\": 159,\n \"Hf\": 310,\n \"Hg\": 282,\n \"Ho\": 323,\n \"I\": 41,\n \"In\": 364,\n \"Ir\": 305,\n \"K\": 313,\n \"La\": 312,\n \"Li\": 469,\n \"Lu\": 291,\n \"Mg\": 683,\n \"Mn\": 4357,\n \"Mo\": 437,\n \"N\": 137,\n \"Na\": 339,\n \"Nb\": 296,\n \"Nd\": 179,\n \"Ni\": 363,\n \"Np\": 252,\n \"O\": 1390,\n \"On\": 6,\n \"Os\": 288,\n \"Ox\": 39,\n \"P\": 153,\n \"Pa\": 272,\n \"Pb\": 278,\n \"Pd\": 361,\n \"Pm\": 273,\n \"Pr\": 312,\n \"Pt\": 338,\n \"Pu\": 280,\n \"Rb\": 163,\n \"Re\": 134,\n \"Rh\": 320,\n \"Ru\": 304,\n \"S\": 161,\n \"Sb\": 327,\n \"Sc\": 331,\n \"Se\": 138,\n \"Si\": 412,\n \"Sm\": 330,\n \"Sn\": 303,\n \"Sr\": 221,\n \"Ta\": 160,\n \"Tb\": 174,\n \"Tc\": 139,\n \"Te\": 361,\n \"Th\": 287,\n \"Ti\": 211,\n \"Tl\": 295,\n \"Tm\": 312,\n \"U\": 223,\n \"V\": 1646,\n \"Va\": 2,\n \"W\": 259,\n \"Xe\": 1,\n \"Y\": 332,\n \"Yb\": 324,\n \"Zn\": 315,\n \"Zr\": 167\n}\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e73462984bb530af8c977105269a73c4faadaf2a | 26,808 | ipynb | Jupyter Notebook | Untitled.ipynb | vinicius-17/House-Prediction | d95048aef41d5d04b8b86c0f1f573e76fac61888 | [
"MIT"
] | null | null | null | Untitled.ipynb | vinicius-17/House-Prediction | d95048aef41d5d04b8b86c0f1f573e76fac61888 | [
"MIT"
] | null | null | null | Untitled.ipynb | vinicius-17/House-Prediction | d95048aef41d5d04b8b86c0f1f573e76fac61888 | [
"MIT"
] | null | null | null | 35.227332 | 106 | 0.321546 | [
[
[
"#Data Analysis Phase\n#Importing Data\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns",
"_____no_output_____"
],
[
"pd.pandas.set_option('display.max_columns', None)",
"_____no_output_____"
],
[
"dataset = pd.read_csv('train.csv')",
"_____no_output_____"
],
[
"print(dataset.shape)",
"(1460, 81)\n"
],
[
"dataset.head()",
"_____no_output_____"
],
[
"#Here we will check the percentage of nan values present in each feature\n\n#1 -step make the list of features which has missing values\nfeatures_with_na=[features for features in dataset.columns if dataset[features].isnull().sum()>1]\n\n#2- step print the feature name and the percentage of missing values\nfor feature in features_with_na:\n print(feature, np.round(dataset[feature].isnull().mean(), 4), ' % missing values')",
"LotFrontage 0.1774 % missing values\nAlley 0.9377 % missing values\nMasVnrType 0.0055 % missing values\nMasVnrArea 0.0055 % missing values\nBsmtQual 0.0253 % missing values\nBsmtCond 0.0253 % missing values\nBsmtExposure 0.026 % missing values\nBsmtFinType1 0.0253 % missing values\nBsmtFinType2 0.026 % missing values\nFireplaceQu 0.4726 % missing values\nGarageType 0.0555 % missing values\nGarageYrBlt 0.0555 % missing values\nGarageFinish 0.0555 % missing values\nGarageQual 0.0555 % missing values\nGarageCond 0.0555 % missing values\nPoolQC 0.9952 % missing values\nFence 0.8075 % missing values\nMiscFeature 0.963 % missing values\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e73464553501e745c097beb4f440bf8969d99ee9 | 383,195 | ipynb | Jupyter Notebook | Notebooks/.ipynb_checkpoints/Housing prices-checkpoint.ipynb | pristdata/prisdata.github.io | 6873f2a53cae78c952fe1015a07ac5b767aa07f5 | [
"Apache-2.0"
] | 1 | 2021-03-30T16:57:35.000Z | 2021-03-30T16:57:35.000Z | Notebooks/.ipynb_checkpoints/Housing prices-checkpoint.ipynb | pristdata/pristdata.github.io | 6873f2a53cae78c952fe1015a07ac5b767aa07f5 | [
"Apache-2.0"
] | null | null | null | Notebooks/.ipynb_checkpoints/Housing prices-checkpoint.ipynb | pristdata/pristdata.github.io | 6873f2a53cae78c952fe1015a07ac5b767aa07f5 | [
"Apache-2.0"
] | null | null | null | 391.414709 | 112,340 | 0.929884 | [
[
[
"# Housing market predictions\n\nThe real estate markets present an interesting opportunity for data scientists to analyze and predict the behaviour and trends of property prices.\n\nIn this project I focus on implementing a few advanced regression models to predict housing prices based on various property and location characteristics (publicly available dataset from Kaggle).",
"_____no_output_____"
],
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#-1.-Data-exploration-and-cleaning-\" data-toc-modified-id=\"-1.-Data-exploration-and-cleaning--1\"><span style=\"color: steelblue\"> 1. Data exploration and cleaning </span></a></span></li><li><span><a href=\"#-2.-Data-visualization-\" data-toc-modified-id=\"-2.-Data-visualization--2\"><span style=\"color: steelblue\"> 2. Data visualization </span></a></span></li><li><span><a href=\"#-3.-Data-preparation-\" data-toc-modified-id=\"-3.-Data-preparation--3\"><span style=\"color: steelblue\"> 3. Data preparation </span></a></span></li><li><span><a href=\"#-4.-Model-fitting-\" data-toc-modified-id=\"-4.-Model-fitting--4\"><span style=\"color: steelblue\"> 4. Model fitting </span></a></span></li><li><span><a href=\"#-5.-Discussion-and-conclusions-\" data-toc-modified-id=\"-5.-Discussion-and-conclusions--5\"><span style=\"color: steelblue\"> 5. Discussion and conclusions </span></a></span></li></ul></div>",
"_____no_output_____"
]
],
[
[
"import os\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.feature_selection import SelectFromModel\nfrom sklearn.linear_model import Ridge \nfrom sklearn.linear_model import Lasso \nfrom sklearn.linear_model import BayesianRidge \nfrom sklearn.ensemble import GradientBoostingRegressor\nimport xgboost as xgb\nfrom xgboost import XGBRegressor\nfrom xgboost import plot_importance\nfrom sklearn.metrics import r2_score as r2 \nfrom tabulate import tabulate\nfrom sklearn.model_selection import GridSearchCV\nfrom numpy import arange\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## <span style='color:steelblue'> 1. Data exploration and cleaning </span>",
"_____no_output_____"
]
],
[
[
"housing = pd.read_csv(\"data.csv\")",
"_____no_output_____"
],
[
"housing.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 4600 entries, 0 to 4599\nData columns (total 18 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 date 4600 non-null object \n 1 price 4600 non-null float64\n 2 bedrooms 4600 non-null float64\n 3 bathrooms 4600 non-null float64\n 4 sqft_living 4600 non-null int64 \n 5 sqft_lot 4600 non-null int64 \n 6 floors 4600 non-null float64\n 7 waterfront 4600 non-null int64 \n 8 view 4600 non-null int64 \n 9 condition 4600 non-null int64 \n 10 sqft_above 4600 non-null int64 \n 11 sqft_basement 4600 non-null int64 \n 12 yr_built 4600 non-null int64 \n 13 yr_renovated 4600 non-null int64 \n 14 street 4600 non-null object \n 15 city 4600 non-null object \n 16 statezip 4600 non-null object \n 17 country 4600 non-null object \ndtypes: float64(4), int64(9), object(5)\nmemory usage: 647.0+ KB\n"
],
[
"housing.head()",
"_____no_output_____"
],
[
"print(housing.isnull().sum())",
"date 0\nprice 0\nbedrooms 0\nbathrooms 0\nsqft_living 0\nsqft_lot 0\nfloors 0\nwaterfront 0\nview 0\ncondition 0\nsqft_above 0\nsqft_basement 0\nyr_built 0\nyr_renovated 0\nstreet 0\ncity 0\nstatezip 0\ncountry 0\ndtype: int64\n"
]
],
[
[
"The data set consists of 4600 rows of 18 columns comprising various property characteristics related to their size, room and distribution attributes and city (from the state of Washington, U.S.). There are no null values. ",
"_____no_output_____"
],
[
"Since the location related columns (except 'city' and 'street') and the 'date' column are homogenous and thus irrelevant for analysis they were eliminated using the code below. Also the 'price' column was rounded.",
"_____no_output_____"
]
],
[
[
"# Unnecesary columns elimination\n\nhousing = housing.loc[:, housing.columns != 'country']\nhousing = housing.loc[:, housing.columns != 'date']",
"_____no_output_____"
],
[
"# Round the price column\n\nhousing['price'] = housing['price'].round(decimals=2)",
"_____no_output_____"
]
],
[
[
"## <span style='color:steelblue'> 2. Data visualization </span>",
"_____no_output_____"
],
[
"The price distribution plot below shows that most houses are in the range of 250 thousand dollars to one million, with a median of around 460 thousand dollars.",
"_____no_output_____"
]
],
[
[
"round(housing.price.median())",
"_____no_output_____"
],
[
"# Housing prices distribution\n\nplt.figure(figsize = (11,6))\n\nsns.histplot(housing['price'], color = \"c\", kde=True)\nplt.xlabel('Price in millions', fontsize = 14)\nplt.ylabel('Frequency', fontsize = 14)\nplt.xticks(fontsize = 14)\nplt.yticks(fontsize = 14)\nplt.xlim(0, 2500000)\n\nplt.show()",
"_____no_output_____"
],
[
"housing.city.value_counts()[:10] # top housing market cities ",
"_____no_output_____"
],
[
"# Creating data frame for violin plot \ncities = ('Seattle', 'Renton', 'Bellevue', \n 'Redmond', 'Issaquah', 'Kirkland', \n 'Kent', 'Auburn', 'Sammamish', 'Federal Way')\nhousing.city.isin(cities)\ntop_cities = housing[housing.city.isin(cities)]",
"_____no_output_____"
],
[
"plt.figure(figsize=(14, 6))\n\nax = sns.violinplot(data=top_cities, x='city', y='price')\nax.set_ylim(bottom=0, top=1500000)\nax.set_xlabel(\"City\", fontsize = 12)\nax.set_ylabel(\"Price\", fontsize = 12)\nplt.xticks(fontsize = 11)\nplt.yticks(fontsize = 12)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
" \n\nThe plot above shows that there is considerable price viariability by city. This may be true also for different streets and postal codes so I decided to leave those object variables for analysis (later enconded/transformed into dummy variables). \n\n ",
"_____no_output_____"
]
],
[
[
"# Scatterplots of housing rooms, floors and condition\n\nplt.style.use('default')\nfig, ax = plt.subplots(2, 2, figsize = (8, 6)) \n\nhousing.plot(kind='scatter', x='bedrooms', y='price', color='mediumaquamarine', alpha=0.3, ylim=(0,2500000), ax=ax[0,0])\nhousing.plot(kind='scatter', x='bathrooms', y='price', color='steelblue', alpha=0.3, ylim=(0,2500000), ax=ax[0,1])\nhousing.plot(kind='scatter', x='floors', y='price', color='teal', alpha=0.3, ylim=(0,2500000), ax=ax[1,0])\nhousing.plot(kind='scatter', x='condition', y='price', color='skyblue', alpha=0.3, ylim=(0,2500000), ax=ax[1,1])\n\nax[0,0].set_xlabel('Bedrooms', size=12, color='maroon')\nax[0,1].set_xlabel('Bathrooms', size=12, color='maroon')\nax[1,0].set_xlabel('Floors', size=12, color='maroon')\nax[1,1].set_xlabel('Condition', size=12, color='maroon')\n\nplt.show()",
"_____no_output_____"
]
],
[
[
" \n\nAs observed in the plots above, the house condition, number of bathrooms and number of bedrooms in general show a positive correlation with price. The relation between number of floors and the house price is not entirely clear.\n\n ",
"_____no_output_____"
]
],
[
[
"# Scatterplots of house square feet variables (size)\nplt.style.use('seaborn-dark')\nfig, ax = plt.subplots(2, 2, figsize = (8, 6)) \n\nhousing.plot(kind='scatter', x='sqft_living', y='price', color='blue', alpha=0.2, ylim=(0,3000000), ax=ax[0,0])\nhousing.plot(kind='scatter', x='sqft_lot', y='price', color='darkmagenta', alpha=0.2, xlim=(0,250000), ylim=(0,3000000), ax=ax[0,1])\nhousing.plot(kind='scatter', x='sqft_above', y='price', color='mediumvioletred', alpha=0.2, ylim=(0,3000000), ax=ax[1,0])\nhousing.plot(kind='scatter', x='sqft_basement', y='price', color='darkgreen', alpha=0.2, ylim=(0,3000000), ax=ax[1,1])\n\nax[0,0].set_xlabel('Square ft living', size=12, color='maroon')\nax[0,1].set_xlabel('Square ft lot', size=12, color='maroon')\nax[0,1].set_xticks((0,250000, 50000, 100000, 150000,200000))\nax[1,0].set_xlabel('Square ft above', size=12, color='maroon')\nax[1,1].set_xlabel('Square ft basement', size=12, color='maroon')\n\nplt.show()",
"_____no_output_____"
]
],
[
[
" \n\nThe scatterplots above show that all the variables related to the houses size show somewhat of a positive linear relationship with price. They may be amongst the most important features for price prediction. This will later be elucidated by plotting model's feature importance. \n\n ",
"_____no_output_____"
]
],
[
[
"# Correlation heatmap\n\ncorr_matrix = housing.corr()\nplt.figure(figsize=(10, 8))\nsns.heatmap(corr_matrix, vmax=1, cmap=\"twilight_shifted\")\n\nplt.show()",
"_____no_output_____"
]
],
[
[
" \n\nThere seems to be few pairs of highly correlated numeric variables, however, they are expected since many characteristics are related to the size and condition of the house so I decided to not remove them.\n",
"_____no_output_____"
],
[
"## <span style='color:steelblue'> 3. Data preparation </span>",
"_____no_output_____"
],
[
"All the necessary preprocessing steps for machine learning were followed below. The categorical features were encoded in order to optimize modeling. A matrix of features and dependent variable vector were created. The dataset was split into train and test sets and feature scaling was performed.",
"_____no_output_____"
]
],
[
[
"# Encoding object variables (city, street, zip code)\n\nfor col in housing.columns:\n if housing[col].dtype == 'object':\n encoded = pd.get_dummies(housing[col], drop_first=False)\n encoded = encoded.add_prefix('{}_'.format(col))\n housing.drop(col, axis=1, inplace=True)\n housing = housing.join(encoded)",
"_____no_output_____"
],
[
"# Creating the matrix of features and dependent variable vector \n\nX = housing.loc[:, housing.columns != 'price']\ny = housing.loc[:, 'price']",
"_____no_output_____"
],
[
"# Data set splitting\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 1)",
"_____no_output_____"
],
[
"# Feature scaling \n\nsc = StandardScaler()\nX_train.iloc[:, :12] = sc.fit_transform(X_train.iloc[:, :12])\nX_test.iloc[:, :12] = sc.transform(X_test.iloc[:, :12])",
"_____no_output_____"
]
],
[
[
"## <span style='color:steelblue'> 4. Model fitting </span>",
"_____no_output_____"
],
[
"Since Multiple Linear Regression performed poorly, Polynomial Regression was also attempted (in case of non-linearity predominance) but the results were also deficient. So a few other regression models were fit in this analysis:\n\n- Ridge Regression \n- Lasso Regression\n- Bayesian Ridge Regression\n- Gradient Boosting Regression (sklearn)\n- XGB regressor (XGBoost)\n\nThe model's evaluation was through the R-squared score ('r2_score' metric function from scikit-learn). R-squared is a statistical measure that represents the proportion of the variance for a dependent variable that is explained by an independent variable or variables. It is the most popular evaluation metric for regression models.\n\n\n**Note**: Sometimes the practical significance of the R-squared value can be misunderstood. R-squared is a measure of explanatory power, not fit. It indicates that a regression model has statistically significant explanatory power. So it must be considered as an effect size (measure of the strength of the relationship between variables in a statistical population), and thus, often relatively low values are expected.\nThe ideal ‘r2_score’ of a build should be more than 0.70 or at least higher than 0.60. If the 'r2_score' of a model is 0.50, for example, then approximately half of the observed variation can be explained by the model's inputs.",
"_____no_output_____"
]
],
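[
[
"As a quick, toy illustration of what the score measures (made-up numbers, not the housing data), R-squared can be computed by hand and compared against scikit-learn's `r2_score`:\n\n```python\n# Toy example: R-squared by hand vs scikit-learn\nimport numpy as np\nfrom sklearn.metrics import r2_score\n\ny_true = np.array([3.0, 5.0, 7.0, 9.0])\ny_pred = np.array([2.8, 5.3, 6.6, 9.4])\n\nss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares\nss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares\nprint(1 - ss_res / ss_tot)                      # manual R-squared\nprint(r2_score(y_true, y_pred))                 # same value from scikit-learn\n```",
"_____no_output_____"
]
],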
[
[
"# Some regression models were cross validated through the following grid search method\n\nalphas = arange(0, 1, 0.01) # range of alpha values to test\ngrid = GridSearchCV(estimator=model, param_grid=dict(alpha=alphas)) # search grid \ngrid.fit(X, y) # fit\n\nprint(grid.best_score_) # summary of the grid search\nprint(grid.best_estimator_.alpha)",
"_____no_output_____"
],
[
"# Ridge regression\n\nridge = Ridge(alpha = 0.99) # alpha obtained through cross validation\nridge.fit(X_train, y_train)\nridge_ypred = ridge.predict(X_test)\nridge_r = r2(y_test, ridge_ypred)",
"_____no_output_____"
],
[
"# Bayesian ridge regression\n\nbayesian_r = BayesianRidge()\nbayesian_r.fit(X_train, y_train)\nbayesianr_ypred = bayesian_r.predict(X_test)\nbayesianr_r = r2(y_test, bayesianr_ypred)",
"_____no_output_____"
],
[
"# Lasso regression\n\nlasso = Lasso(alpha = 0.7) # alpha obtained through cross validation\nlasso.fit(X_train, y_train)\nlasso_yhat = lasso.predict(X_test)\nlasso_r = r2(y_test, lasso_yhat)",
"_____no_output_____"
],
[
"# Gradient boosting regressor\n\ngbr_reg = GradientBoostingRegressor()\ngbr_reg.fit(X_train, y_train)\ngbr_yhat = gbr_reg.predict(X_test)\ngbr_r = r2(y_test, gbr_yhat)",
"_____no_output_____"
],
[
"# XGBoost\n\nXGBoost = XGBRegressor(max_depth=7, n_estimators=1000, learning_rate=0.1) # parameter tuning was performed using xgb.cv\nXGBoost.fit(X_train,y_train)\nXGBoost_yhat = XGBoost.predict(X_test)\nXGB_r = r2(y_test, XGBoost_yhat)",
"_____no_output_____"
],
[
"# Results table\n\nprint(tabulate([['Ridge', ridge_r], ['Bayesian Ridge', bayesianr_r],\n ['Lasso', lasso_r], ['Gradient Boosting', gbr_r],\n ['XGBoost', XGB_r]], headers=['Regression model','R-squared'], tablefmt='orgtbl'))",
"| Regression model | R-squared |\n|--------------------+-------------|\n| Ridge | 0.602584 |\n| Bayesian Ridge | 0.603049 |\n| Lasso | 0.460363 |\n| Gradient Boosting | 0.633882 |\n| XGBoost | 0.709033 |\n"
]
],
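[
[
"The XGBoost parameters above are noted as having been tuned with `xgb.cv`. Roughly, that kind of tuning looks like the sketch below; the candidate values and the RMSE metric are placeholders rather than the exact search that was run.\n\n```python\n# Sketch of xgb.cv-based tuning (placeholder candidate values)\nimport xgboost as xgb\n\ndtrain = xgb.DMatrix(X_train, label=y_train)\n\nfor max_depth in (5, 7, 9):\n    params = {'max_depth': max_depth, 'learning_rate': 0.1, 'objective': 'reg:squarederror'}\n    cv_results = xgb.cv(params, dtrain, num_boost_round=1000, nfold=5,\n                        metrics='rmse', early_stopping_rounds=20, seed=1)\n    print(max_depth, cv_results['test-rmse-mean'].min())\n```",
"_____no_output_____"
]
],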
[
[
"## <span style='color:steelblue'> 5. Discussion and conclusions </span>",
"_____no_output_____"
],
[
"\n* Ridge regression trades away much of the variance (due to multicollinearity) in exchange for a little bias, so it performed relatively well considering there was not that much multicollinearity. \n\n \n* Bayesian ridge regression is also a linear regression model with extra regularization parameters, like ridge regression, only a Bayesian approach equivalent. It very lightly improved the previous result.\n\n \n\n* Lasso regression performs variable selection that aims to increase prediction accuracy with a simpler model. It did worse than the previous 2 models, this method may be more suitable with more extense datasets.\n\n \n\n* Both scikit-learn and XGBoost's functions for gradient boosting regression were used. Gradient Boosting is a machine learning technique which builds an additive model (of weak prediciton models) in a forward stage-wise fashion, it typically uses decision trees. Usually, it even outperforms the Random Forest algorithm. In this case, they both outperformed the previous models.\n\n\nThe XGBoost model outperformed all the other regression models with an R-squared score of 0.709033.\n\n ",
"_____no_output_____"
]
],
[
[
"# Feature importance\n\nplt.figure(figsize=(14, 14))\nplot_importance(XGBoost, max_num_features=10)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"The XGBoost library also contains a function to plot feature importance (above).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7346ac354110543d75c9db749db305692d5a408 | 267,734 | ipynb | Jupyter Notebook | notebook/Coronavirus_Korea_Distribution.ipynb | ClementBM/Experiment_Coronavius | 1a531a782660006129ddd7ece4e333be50c6eb0a | [
"MIT"
] | null | null | null | notebook/Coronavirus_Korea_Distribution.ipynb | ClementBM/Experiment_Coronavius | 1a531a782660006129ddd7ece4e333be50c6eb0a | [
"MIT"
] | null | null | null | notebook/Coronavirus_Korea_Distribution.ipynb | ClementBM/Experiment_Coronavius | 1a531a782660006129ddd7ece4e333be50c6eb0a | [
"MIT"
] | null | null | null | 195.998536 | 57,806 | 0.849922 | [
[
[
"# Import Data",
"_____no_output_____"
]
],
[
[
"import os\n\n!git clone https://github.com/jihoo-kim/Coronavirus-Dataset\nPATIENT_PATH = \"/content/Coronavirus-Dataset/patient.csv\"",
"Cloning into 'Coronavirus-Dataset'...\nremote: Enumerating objects: 46, done.\u001b[K\nremote: Counting objects: 2% (1/46)\u001b[K\rremote: Counting objects: 4% (2/46)\u001b[K\rremote: Counting objects: 6% (3/46)\u001b[K\rremote: Counting objects: 8% (4/46)\u001b[K\rremote: Counting objects: 10% (5/46)\u001b[K\rremote: Counting objects: 13% (6/46)\u001b[K\rremote: Counting objects: 15% (7/46)\u001b[K\rremote: Counting objects: 17% (8/46)\u001b[K\rremote: Counting objects: 19% (9/46)\u001b[K\rremote: Counting objects: 21% (10/46)\u001b[K\rremote: Counting objects: 23% (11/46)\u001b[K\rremote: Counting objects: 26% (12/46)\u001b[K\rremote: Counting objects: 28% (13/46)\u001b[K\rremote: Counting objects: 30% (14/46)\u001b[K\rremote: Counting objects: 32% (15/46)\u001b[K\rremote: Counting objects: 34% (16/46)\u001b[K\rremote: Counting objects: 36% (17/46)\u001b[K\rremote: Counting objects: 39% (18/46)\u001b[K\rremote: Counting objects: 41% (19/46)\u001b[K\rremote: Counting objects: 43% (20/46)\u001b[K\rremote: Counting objects: 45% (21/46)\u001b[K\rremote: Counting objects: 47% (22/46)\u001b[K\rremote: Counting objects: 50% (23/46)\u001b[K\rremote: Counting objects: 52% (24/46)\u001b[K\rremote: Counting objects: 54% (25/46)\u001b[K\rremote: Counting objects: 56% (26/46)\u001b[K\rremote: Counting objects: 58% (27/46)\u001b[K\rremote: Counting objects: 60% (28/46)\u001b[K\rremote: Counting objects: 63% (29/46)\u001b[K\rremote: Counting objects: 65% (30/46)\u001b[K\rremote: Counting objects: 67% (31/46)\u001b[K\rremote: Counting objects: 69% (32/46)\u001b[K\rremote: Counting objects: 71% (33/46)\u001b[K\rremote: Counting objects: 73% (34/46)\u001b[K\rremote: Counting objects: 76% (35/46)\u001b[K\rremote: Counting objects: 78% (36/46)\u001b[K\rremote: Counting objects: 80% (37/46)\u001b[K\rremote: Counting objects: 82% (38/46)\u001b[K\rremote: Counting objects: 84% (39/46)\u001b[K\rremote: Counting objects: 86% (40/46)\u001b[K\rremote: Counting objects: 89% (41/46)\u001b[K\rremote: Counting objects: 91% (42/46)\u001b[K\rremote: Counting objects: 93% (43/46)\u001b[K\rremote: Counting objects: 95% (44/46)\u001b[K\rremote: Counting objects: 97% (45/46)\u001b[K\rremote: Counting objects: 100% (46/46)\u001b[K\rremote: Counting objects: 100% (46/46), done.\u001b[K\nremote: Compressing objects: 2% (1/46)\u001b[K\rremote: Compressing objects: 4% (2/46)\u001b[K\rremote: Compressing objects: 6% (3/46)\u001b[K\rremote: Compressing objects: 8% (4/46)\u001b[K\rremote: Compressing objects: 10% (5/46)\u001b[K\rremote: Compressing objects: 13% (6/46)\u001b[K\rremote: Compressing objects: 15% (7/46)\u001b[K\rremote: Compressing objects: 17% (8/46)\u001b[K\rremote: Compressing objects: 19% (9/46)\u001b[K\rremote: Compressing objects: 21% (10/46)\u001b[K\rremote: Compressing objects: 23% (11/46)\u001b[K\rremote: Compressing objects: 26% (12/46)\u001b[K\rremote: Compressing objects: 28% (13/46)\u001b[K\rremote: Compressing objects: 30% (14/46)\u001b[K\rremote: Compressing objects: 32% (15/46)\u001b[K\rremote: Compressing objects: 34% (16/46)\u001b[K\rremote: Compressing objects: 36% (17/46)\u001b[K\rremote: Compressing objects: 39% (18/46)\u001b[K\rremote: Compressing objects: 41% (19/46)\u001b[K\rremote: Compressing objects: 43% (20/46)\u001b[K\rremote: Compressing objects: 45% (21/46)\u001b[K\rremote: Compressing objects: 47% (22/46)\u001b[K\rremote: Compressing objects: 50% (23/46)\u001b[K\rremote: Compressing objects: 52% (24/46)\u001b[K\rremote: Compressing objects: 54% (25/46)\u001b[K\rremote: Compressing 
objects: 56% (26/46)\u001b[K\rremote: Compressing objects: 58% (27/46)\u001b[K\rremote: Compressing objects: 60% (28/46)\u001b[K\rremote: Compressing objects: 63% (29/46)\u001b[K\rremote: Compressing objects: 65% (30/46)\u001b[K\rremote: Compressing objects: 67% (31/46)\u001b[K\rremote: Compressing objects: 69% (32/46)\u001b[K\rremote: Compressing objects: 71% (33/46)\u001b[K\rremote: Compressing objects: 73% (34/46)\u001b[K\rremote: Compressing objects: 76% (35/46)\u001b[K\rremote: Compressing objects: 78% (36/46)\u001b[K\rremote: Compressing objects: 80% (37/46)\u001b[K\rremote: Compressing objects: 82% (38/46)\u001b[K\rremote: Compressing objects: 84% (39/46)\u001b[K\rremote: Compressing objects: 86% (40/46)\u001b[K\rremote: Compressing objects: 89% (41/46)\u001b[K\rremote: Compressing objects: 91% (42/46)\u001b[K\rremote: Compressing objects: 93% (43/46)\u001b[K\rremote: Compressing objects: 95% (44/46)\u001b[K\rremote: Compressing objects: 97% (45/46)\u001b[K\rremote: Compressing objects: 100% (46/46)\u001b[K\rremote: Compressing objects: 100% (46/46), done.\u001b[K\nReceiving objects: 0% (1/456) \rReceiving objects: 1% (5/456) \rReceiving objects: 2% (10/456) \rReceiving objects: 3% (14/456) \rReceiving objects: 4% (19/456) \rReceiving objects: 5% (23/456) \rReceiving objects: 6% (28/456) \rReceiving objects: 7% (32/456) \rReceiving objects: 8% (37/456) \rReceiving objects: 9% (42/456) \rReceiving objects: 10% (46/456) \rReceiving objects: 11% (51/456) \rReceiving objects: 12% (55/456) \rReceiving objects: 13% (60/456) \rReceiving objects: 14% (64/456) \rReceiving objects: 15% (69/456) \rReceiving objects: 16% (73/456) \rReceiving objects: 17% (78/456) \rReceiving objects: 18% (83/456) \rReceiving objects: 19% (87/456) \rReceiving objects: 20% (92/456) \rReceiving objects: 21% (96/456) \rReceiving objects: 22% (101/456) \rReceiving objects: 23% (105/456) \rReceiving objects: 24% (110/456) \rReceiving objects: 25% (114/456) \rReceiving objects: 26% (119/456) \rReceiving objects: 27% (124/456) \rReceiving objects: 28% (128/456) \rReceiving objects: 29% (133/456) \rReceiving objects: 30% (137/456) \rReceiving objects: 31% (142/456) \rReceiving objects: 32% (146/456) \rReceiving objects: 33% (151/456) \rReceiving objects: 34% (156/456) \rReceiving objects: 35% (160/456) \rReceiving objects: 36% (165/456) \rReceiving objects: 37% (169/456) \rReceiving objects: 38% (174/456) \rReceiving objects: 39% (178/456) \rReceiving objects: 40% (183/456) \rReceiving objects: 41% (187/456) \rReceiving objects: 42% (192/456) \rReceiving objects: 43% (197/456) \rReceiving objects: 44% (201/456) \rReceiving objects: 45% (206/456) \rReceiving objects: 46% (210/456) \rReceiving objects: 47% (215/456) \rReceiving objects: 48% (219/456) \rReceiving objects: 49% (224/456) \rReceiving objects: 50% (228/456) \rReceiving objects: 51% (233/456) \rReceiving objects: 52% (238/456) \rReceiving objects: 53% (242/456) \rReceiving objects: 54% (247/456) \rReceiving objects: 55% (251/456) \rReceiving objects: 56% (256/456) \rReceiving objects: 57% (260/456) \rReceiving objects: 58% (265/456) \rReceiving objects: 59% (270/456) \rReceiving objects: 60% (274/456) \rReceiving objects: 61% (279/456) \rReceiving objects: 62% (283/456) \rReceiving objects: 63% (288/456) \rReceiving objects: 64% (292/456) \rReceiving objects: 65% (297/456) \rReceiving objects: 66% (301/456) \rReceiving objects: 67% (306/456) \rReceiving objects: 68% (311/456) \rReceiving objects: 69% (315/456) \rReceiving objects: 70% (320/456) \rReceiving 
objects: 71% (324/456) \rReceiving objects: 72% (329/456) \rremote: Total 456 (delta 22), reused 0 (delta 0), pack-reused 410\u001b[K\nReceiving objects: 73% (333/456) \rReceiving objects: 74% (338/456) \rReceiving objects: 75% (342/456) \rReceiving objects: 76% (347/456) \rReceiving objects: 77% (352/456) \rReceiving objects: 78% (356/456) \rReceiving objects: 79% (361/456) \rReceiving objects: 80% (365/456) \rReceiving objects: 81% (370/456) \rReceiving objects: 82% (374/456) \rReceiving objects: 83% (379/456) \rReceiving objects: 84% (384/456) \rReceiving objects: 85% (388/456) \rReceiving objects: 86% (393/456) \rReceiving objects: 87% (397/456) \rReceiving objects: 88% (402/456) \rReceiving objects: 89% (406/456) \rReceiving objects: 90% (411/456) \rReceiving objects: 91% (415/456) \rReceiving objects: 92% (420/456) \rReceiving objects: 93% (425/456) \rReceiving objects: 94% (429/456) \rReceiving objects: 95% (434/456) \rReceiving objects: 96% (438/456) \rReceiving objects: 97% (443/456) \rReceiving objects: 98% (447/456) \rReceiving objects: 99% (452/456) \rReceiving objects: 100% (456/456) \rReceiving objects: 100% (456/456), 246.06 KiB | 2.03 MiB/s, done.\nResolving deltas: 0% (0/276) \rResolving deltas: 5% (14/276) \rResolving deltas: 6% (18/276) \rResolving deltas: 36% (100/276) \rResolving deltas: 38% (107/276) \rResolving deltas: 41% (115/276) \rResolving deltas: 44% (124/276) \rResolving deltas: 46% (128/276) \rResolving deltas: 48% (133/276) \rResolving deltas: 72% (201/276) \rResolving deltas: 73% (203/276) \rResolving deltas: 74% (205/276) \rResolving deltas: 81% (225/276) \rResolving deltas: 85% (235/276) \rResolving deltas: 87% (241/276) \rResolving deltas: 89% (247/276) \rResolving deltas: 90% (249/276) \rResolving deltas: 100% (276/276) \rResolving deltas: 100% (276/276), done.\n"
],
[
"import os\n\n!git clone https://github.com/ClementBM/Experiment_Coronavius.git\nPYRAMID_PATH = \"/content/Experiment_Coronavius/data/population-pyramid-south-korea.csv\"",
"Cloning into 'Experiment_Coronavius'...\nremote: Enumerating objects: 37, done.\u001b[K\nremote: Counting objects: 2% (1/37)\u001b[K\rremote: Counting objects: 5% (2/37)\u001b[K\rremote: Counting objects: 8% (3/37)\u001b[K\rremote: Counting objects: 10% (4/37)\u001b[K\rremote: Counting objects: 13% (5/37)\u001b[K\rremote: Counting objects: 16% (6/37)\u001b[K\rremote: Counting objects: 18% (7/37)\u001b[K\rremote: Counting objects: 21% (8/37)\u001b[K\rremote: Counting objects: 24% (9/37)\u001b[K\rremote: Counting objects: 27% (10/37)\u001b[K\rremote: Counting objects: 29% (11/37)\u001b[K\rremote: Counting objects: 32% (12/37)\u001b[K\rremote: Counting objects: 35% (13/37)\u001b[K\rremote: Counting objects: 37% (14/37)\u001b[K\rremote: Counting objects: 40% (15/37)\u001b[K\rremote: Counting objects: 43% (16/37)\u001b[K\rremote: Counting objects: 45% (17/37)\u001b[K\rremote: Counting objects: 48% (18/37)\u001b[K\rremote: Counting objects: 51% (19/37)\u001b[K\rremote: Counting objects: 54% (20/37)\u001b[K\rremote: Counting objects: 56% (21/37)\u001b[K\rremote: Counting objects: 59% (22/37)\u001b[K\rremote: Counting objects: 62% (23/37)\u001b[K\rremote: Counting objects: 64% (24/37)\u001b[K\rremote: Counting objects: 67% (25/37)\u001b[K\rremote: Counting objects: 70% (26/37)\u001b[K\rremote: Counting objects: 72% (27/37)\u001b[K\rremote: Counting objects: 75% (28/37)\u001b[K\rremote: Counting objects: 78% (29/37)\u001b[K\rremote: Counting objects: 81% (30/37)\u001b[K\rremote: Counting objects: 83% (31/37)\u001b[K\rremote: Counting objects: 86% (32/37)\u001b[K\rremote: Counting objects: 89% (33/37)\u001b[K\rremote: Counting objects: 91% (34/37)\u001b[K\rremote: Counting objects: 94% (35/37)\u001b[K\rremote: Counting objects: 97% (36/37)\u001b[K\rremote: Counting objects: 100% (37/37)\u001b[K\rremote: Counting objects: 100% (37/37), done.\u001b[K\nremote: Compressing objects: 100% (33/33), done.\u001b[K\nremote: Total 37 (delta 12), reused 25 (delta 4), pack-reused 0\u001b[K\nUnpacking objects: 100% (37/37), done.\n"
],
[
"import pandas as pd\n\ndf_korea_patients = pd.read_csv(PATIENT_PATH)\ndf_korea_population_pyramid = pd.read_csv(PYRAMID_PATH)",
"_____no_output_____"
]
],
[
[
"# EDA Patients",
"_____no_output_____"
]
],
[
[
"df_korea_patients.head()\ndisplay(df_korea_patients.head())\ndisplay(df_korea_patients.shape)\ndisplay(df_korea_patients.columns)\ndisplay(df_korea_patients.iloc[:10,:10].dtypes)",
"_____no_output_____"
]
],
[
[
"## Cleaning data",
"_____no_output_____"
]
],
[
[
"# drop sample if sex or birth_year is NaN\nnot_nan = df_korea_patients['birth_year'].notna() & df_korea_patients['sex'].notna()\ndf_korea_patients = df_korea_patients[not_nan]\n# typo\ndf_korea_patients[\"sex\"] = df_korea_patients[\"sex\"].replace(\"feamle\", \"female\")\n\ndf_korea_patients['age'] = 2020 - df_korea_patients['birth_year'] ",
"_____no_output_____"
],
[
"df_korea_patients.head()\ndisplay(df_korea_patients.head())\ndisplay(df_korea_patients.shape)\ndisplay(df_korea_patients.columns)\ndisplay(df_korea_patients.iloc[:10,:10].dtypes)",
"_____no_output_____"
]
],
[
[
"## Distribution",
"_____no_output_____"
]
],
[
[
"import seaborn as sns\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(14,10))\nsns.violinplot(x=\"state\", y=\"age\", hue=\"sex\", data=df_korea_patients,\n order=[\"deceased\", \"isolated\", \"released\"], \n palette={\"female\": \"#d98b5f\", \n \"male\": \"#597dbf\"}, \n split=True)\nplt.show()",
"_____no_output_____"
],
[
"df_korea_patients_by_age_sex = df_korea_patients.groupby([\"state\", \"sex\"], \n as_index=False)[\"state\", \"sex\"].size()\n\ndf_korea_patients_by_age_sex.unstack().plot(kind='bar', \n color=['#d98b5f', '#597dbf'],\n figsize=(14, 10))\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Age pyramid",
"_____no_output_____"
]
],
[
[
"df_korea_population_pyramid.plot(kind='barh',\n x=\"age_range\",\n color=['#597dbf', '#d98b5f'],\n figsize=(14, 10))\nplt.xlabel(\"Population\")\nplt.ylabel(\"Age range\")\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Adding age range ",
"_____no_output_____"
]
],
[
[
"import math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef group_age(age, window):\n if age > 99:\n return \"100+\"\n if age % 5 != 0:\n lower = int(math.floor(age / float(window))) * window\n upper = int(math.ceil(age / float(window))) * window - 1\n return f\"{lower}-{upper}\"\n else:\n lower = int(age)\n upper = int(age + window - 1) \n return f\"{lower}-{upper}\"\n\ndef group_age_by_5(age):\n return group_age(age, 5)",
"_____no_output_____"
],
[
"df_korea_patients[\"age_range\"] = df_korea_patients[\"age\"].apply(group_age_by_5)",
"_____no_output_____"
]
],
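[
[
"A couple of quick checks of the helper above (the expected values follow directly from the function definition):\n\n```python\nprint(group_age_by_5(37))   # '35-39' (not a multiple of 5, so it falls into the 35-39 bucket)\nprint(group_age_by_5(40))   # '40-44' (multiples of 5 start their own bucket)\nprint(group_age_by_5(101))  # '100+'\n```",
"_____no_output_____"
]
],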
[
[
"### Age range dictionary for ordering values",
"_____no_output_____"
]
],
[
[
"age_range_order = df_korea_population_pyramid[\"age_range\"].to_dict()\nage_range_order = {v: k for k, v in age_range_order.items()}",
"_____no_output_____"
]
],
[
[
"### Age pyramid proportion",
"_____no_output_____"
]
],
[
[
"total_male = sum(df_korea_population_pyramid.loc[:,'male'])\ntotal_female = sum(df_korea_population_pyramid.loc[:,'female'])\n\ndf_korea_population_pyramid[\"male_prop\"] = df_korea_population_pyramid[\"male\"] / total_male\ndf_korea_population_pyramid[\"female_prop\"] = df_korea_population_pyramid[\"female\"] / total_female",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"def infected_population_normed(df):\n result = df.groupby([\"age_range\", \"sex\"], as_index=False)[\"age_range\", \"sex\"].size()\n \n result = (\n pd.DataFrame(result)\n .pivot_table(index=[\"age_range\"], columns=[\"sex\"], fill_value=0.0)\n .reset_index()\n )\n\n result = result.set_index(\"age_range\")\n \n result = result[0] # unstack() ?\n result[\"order\"] = [age_range_order[x] for x in result.index]\n result = result.sort_values(by=\"order\")\n \n indices = [age_range_order[x] for x in result.index]\n df_normed = pd.DataFrame(index=df_korea_population_pyramid[\"age_range\"], columns=[\"male\",\"female\"])\n\n df_normed.loc[result.index, \"male\"] = result[\"male\"] / df_korea_population_pyramid.loc[indices][\"male_prop\"].values\n df_normed.loc[result.index, \"female\"] = result[\"female\"] / df_korea_population_pyramid.loc[indices][\"female_prop\"].values\n\n df_normed = df_normed.replace(np.inf, 0)\n df_normed = df_normed.replace(np.nan, 0)\n\n df_normed[\"male\"] = df_normed[\"male\"] * 100 / sum(df_normed[\"male\"])\n df_normed[\"female\"] = df_normed[\"female\"] * 100 / sum(df_normed[\"female\"])\n\n distribution_normed = pd.DataFrame({'male': df_normed[\"male\"], 'female': df_normed[\"female\"]})\n distribution_normed = distribution_normed.reset_index()\n\n return distribution_normed",
"_____no_output_____"
],
[
"def plot_normed_distribution(df):\n distribution_normed = infected_population_normed(df)\n\n df = pd.DataFrame({'male': distribution_normed['male'].values, 'female': distribution_normed['female'].values}, index=distribution_normed[\"age_range\"])\n df.plot.barh(figsize=(14, 10), color=['#597dbf', '#d98b5f'])\n\n plt.xlabel(\"Proportion %\")\n plt.ylabel(\"Age range\")\n plt.legend()\n plt.show()",
"_____no_output_____"
]
],
[
[
"## Normed distribution of all",
"_____no_output_____"
]
],
[
[
"display(df_korea_patients.shape[0])\nplot_normed_distribution(df_korea_patients)",
"_____no_output_____"
],
[
"display(df_korea_patients[df_korea_patients[\"age_range\"] == \"100+\"])\ndisplay(df_korea_population_pyramid[df_korea_population_pyramid[\"age_range\"] == \"100+\"])",
"_____no_output_____"
],
[
"display(df_korea_patients.shape[0])\nplot_normed_distribution(df_korea_patients[df_korea_patients[\"age_range\"] != \"100+\"])",
"_____no_output_____"
]
],
[
[
"## Normed proportion of **deceased**",
"_____no_output_____"
]
],
[
[
"df_deceased = df_korea_patients[df_korea_patients[\"state\"] == \"deceased\"]\ndisplay(df_deceased.shape[0])\nplot_normed_distribution(df_deceased)",
"_____no_output_____"
]
],
[
[
"## Normed proportion of **released**",
"_____no_output_____"
]
],
[
[
"df_released = df_korea_patients[df_korea_patients[\"state\"] == \"released\"]\ndisplay(df_released.shape[0])\nplot_normed_distribution(df_released)",
"_____no_output_____"
]
],
[
[
"## Normed proportion of **isolated**",
"_____no_output_____"
]
],
[
[
"df_released = df_korea_patients[(df_korea_patients[\"state\"] == \"isolated\") & (df_korea_patients[\"age_range\"] != \"100+\")]\ndisplay(df_released.shape[0])\nplot_normed_distribution(df_released)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7348a0f0138bf45090c79b882e3ff0e6fee379f | 676,977 | ipynb | Jupyter Notebook | notebooks/zeisel/Zeisel_SGBM_dup.ipynb | redst4r/arboreto | 3ff7b6f987b32e5774771751dea646fa6feaaa52 | [
"BSD-3-Clause"
] | 20 | 2018-06-28T07:00:47.000Z | 2020-10-08T08:58:22.000Z | notebooks/zeisel/Zeisel_SGBM_dup.ipynb | redst4r/arboreto | 3ff7b6f987b32e5774771751dea646fa6feaaa52 | [
"BSD-3-Clause"
] | 23 | 2018-06-06T13:11:20.000Z | 2021-01-08T03:37:43.000Z | notebooks/zeisel/Zeisel_SGBM_dup.ipynb | redst4r/arboreto | 3ff7b6f987b32e5774771751dea646fa6feaaa52 | [
"BSD-3-Clause"
] | 15 | 2018-11-21T08:21:46.000Z | 2020-11-25T06:28:32.000Z | 548.159514 | 266,114 | 0.929024 | [
[
[
"# Zeisel GRN Inference and Analysis (nb duplicate)",
"_____no_output_____"
],
[
"## 0. Import dependencies",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nsys.path.append('../../')\n\nfrom arboreto.core import *\nfrom arboreto.utils import *\n\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"## 1. Load the data (outside the scope of the arboreto API)",
"_____no_output_____"
]
],
[
[
"zeisel_ex_path = '/media/tmo/data/work/datasets/zeisel/expression_sara_filtered.txt'\nzeisel_tf_path = '/media/tmo/data/work/datasets/TF/mm9_TFs.txt'",
"_____no_output_____"
],
[
"zeisel_df = pd.read_csv(zeisel_ex_path, index_col=0, sep='\\t').T\nzeisel_df.head()",
"_____no_output_____"
],
[
"zeisel_ex_matrix = zeisel_df.as_matrix().astype(np.float)\nzeisel_ex_matrix",
"_____no_output_____"
],
[
"assert(zeisel_ex_matrix.shape == (3005, 13063))",
"_____no_output_____"
],
[
"zeisel_gene_names = list(zeisel_df.columns)\nzeisel_gene_names[:5]",
"_____no_output_____"
],
[
"zeisel_tf_names = load_tf_names(zeisel_tf_path)\nzeisel_tf_names[:5]",
"_____no_output_____"
]
],
[
[
"# X. Calculate a 'signal' measure\n\n* count the number of non-zero entries per column",
"_____no_output_____"
]
],
[
[
"signal_series = zeisel_df.astype(bool).sum(axis=0)",
"_____no_output_____"
],
[
"nonzero_df = signal_series.to_frame('non_zero').sort_values(by='non_zero', ascending=False).reset_index()\nnonzero_df.columns = ['target', 'non_zero']\n\nnonzero_df.to_csv('zeisel_nonzero.tsv', sep='\\t')\nnonzero_df.head()",
"_____no_output_____"
],
[
"nonzero_df.merge(meta_df, on=['target'])[['n_estimators', 'non_zero']].plot(x='non_zero', y='n_estimators', kind='scatter', figsize=(16,16))\nplt.savefig('n_nonzero_vs_n_estimators.png')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 2. Initialize Dask client",
"_____no_output_____"
]
],
[
[
"from dask.distributed import Client, LocalCluster",
"_____no_output_____"
],
[
"client = Client(LocalCluster(memory_limit=8e9))",
"_____no_output_____"
],
[
"client",
"_____no_output_____"
]
],
[
[
"If you work remotely, use port forwarding to view the dashboard:\n\n```bash\n$ ssh -L 8000:localhost:8787 nostromo\n```",
"_____no_output_____"
]
],
[
[
"client.shutdown()",
"_____no_output_____"
]
],
[
[
"## 3. Compute GRN inference graph",
"_____no_output_____"
],
[
"#### Create the dask computation graphs",
"_____no_output_____"
]
],
[
[
"%%time\nnetwork_graph, meta_graph = create_graph(zeisel_ex_matrix,\n zeisel_gene_names,\n zeisel_tf_names,\n \"GBM\",\n SGBM_KWARGS,\n target_genes='all',\n early_stop_window_length=25,\n include_meta=True)",
"CPU times: user 11.4 s, sys: 1.94 s, total: 13.3 s\nWall time: 10.7 s\n"
]
],
[
[
"#### Persist the distributed DataFrames",
"_____no_output_____"
]
],
[
[
"%%time\na, b = client.persist([network_graph, meta_graph])",
"_____no_output_____"
]
],
[
[
"#### Compute results",
"_____no_output_____"
]
],
[
[
"%%time\nnetwork_df = a.compute(sync=True)",
"CPU times: user 18.2 s, sys: 2.58 s, total: 20.8 s\nWall time: 19.8 s\n"
]
],
[
[
"* CPU times: user 8min 15s, sys: 5min 41s, total: 13min 56s\n* Wall time: **16min 30s**",
"_____no_output_____"
]
],
[
[
"%%time\nmeta_df = b.compute(sync=True)",
"CPU times: user 16.6 s, sys: 1.51 s, total: 18.2 s\nWall time: 17.3 s\n"
]
],
[
[
"## 4. Save full and top_100k networks to file",
"_____no_output_____"
]
],
[
[
"len(network_df)",
"_____no_output_____"
],
[
"len(meta_df)",
"_____no_output_____"
],
[
"meta_df.to_csv('zeisel_meta_df.tsv', sep='\\t')",
"_____no_output_____"
],
[
"network_df.sort_values(by='importance', ascending=0).to_csv('zeisel_sgbm_all.txt', index=False, sep='\\t')",
"_____no_output_____"
],
[
"top_100k = network_df.nlargest(100000, columns=['importance'])",
"_____no_output_____"
],
[
"top_100k.to_csv('zeisel_sgbm_100k.txt', index=False, sep='\\t')",
"_____no_output_____"
],
[
"merged_df = top_100k.merge(meta_df, on='target')",
"_____no_output_____"
],
[
"merged_df.head()",
"_____no_output_____"
],
[
"merged_df['imp2'] = merged_df['importance'] / merged_df['n_estimators']",
"_____no_output_____"
],
[
"top_100k.plot(use_index=0, figsize=(16,9))\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Distribution of nr of boosting rounds per regression",
"_____no_output_____"
]
],
[
[
"meta_df.hist(bins=100, figsize=(20, 9), log=0)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Plot the maximum variable importance (sklearn default) vs. nr of boosting rounds\n\n* **!= the formula in Arboreto**\n* Using the sklearn default variable importances which normalizes regressions by dividing by nr of trees in the ensemble.\n* Effect is that regressions with few trees also deliver high feature importances (aka network links), this is undesirable.\n* In Arboreto, we omit this normalization step to make use of the nr of trees as a heuristic indicator of how much *signal* there is in a regression.",
"_____no_output_____"
]
],
[
[
"max_imp2_by_rounds =\\\nmeta_df.merge(merged_df.groupby(['target'])['imp2'].nlargest(1).reset_index(), \n how='left', \n on=['target'])\n\nmax_imp2_by_rounds.plot.scatter(x='n_estimators', y='imp2', figsize=(16, 9))\nplt.show()",
"_____no_output_____"
],
[
"max_imp2_by_rounds.plot.hexbin(x='n_estimators', \n y='imp2', \n bins='log', \n cmap='inferno',\n figsize=(16, 9))\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Plotting corrected feature importance (Arboreto SGBM default) vs. nr of boosting rounds",
"_____no_output_____"
]
],
[
[
"max_imp_by_rounds =\\\nmeta_df.merge(network_df.groupby(['target'])['importance'].nlargest(1).reset_index(), \n how='left', \n on=['target'])\n\nmax_imp_by_rounds.plot.scatter(x='n_estimators', y='importance', figsize=(16, 9))\nplt.show()",
"_____no_output_____"
],
[
"max_imp_by_rounds.plot.hexbin(x='n_estimators', \n bins='log',\n cmap='inferno',\n y='importance',\n figsize=(16, 9))\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Links in common with GENIE3",
"_____no_output_____"
]
],
[
[
"z_genie3 = pd.read_csv('/media/tmo/data/work/datasets/benchmarks/genie3/zeisel/zeisel.filtered.genie3.txt', header=None, sep='\\t')\nz_genie3.columns=['TF', 'target', 'importance']",
"_____no_output_____"
],
[
"inner = z_genie3.merge(top_100k, how='inner', on=['TF', 'target'])",
"_____no_output_____"
],
[
"len(inner)",
"_____no_output_____"
],
[
"inner_50k = z_genie3[:50000].merge(top_100k[:50000], how='inner', on=['TF', 'target'])",
"_____no_output_____"
],
[
"len(inner_50k)",
"_____no_output_____"
],
[
"inner_25k = z_genie3[:25000].merge(top_100k[:25000], how='inner', on=['TF', 'target'])",
"_____no_output_____"
],
[
"len(inner_25k) / 25000",
"_____no_output_____"
],
[
"inner_10k = z_genie3[:10000].merge(top_100k[:10000], how='inner', on=['TF', 'target'])",
"_____no_output_____"
],
[
"len(inner_10k)",
"_____no_output_____"
],
[
"inner_5k = z_genie3[:5000].merge(top_100k[:5000], how='inner', on=['TF', 'target'])",
"_____no_output_____"
],
[
"len(inner_5k)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"raw",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"raw"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e734940c841b9aade238f46d2a05de5475030fd2 | 148,508 | ipynb | Jupyter Notebook | demos/time-series/gcn-lstm-time-series.ipynb | lyubov888L/stellargraph | cc15f176c6658d122d30cf7af3e08d3e139b3974 | [
"Apache-2.0"
] | null | null | null | demos/time-series/gcn-lstm-time-series.ipynb | lyubov888L/stellargraph | cc15f176c6658d122d30cf7af3e08d3e139b3974 | [
"Apache-2.0"
] | null | null | null | demos/time-series/gcn-lstm-time-series.ipynb | lyubov888L/stellargraph | cc15f176c6658d122d30cf7af3e08d3e139b3974 | [
"Apache-2.0"
] | null | null | null | 145.73896 | 64,068 | 0.865448 | [
[
[
"# Forecasting using spatio-temporal data with combined Graph Convolution + LSTM model",
"_____no_output_____"
],
[
"<table><tr><td>Run the latest release of this notebook:</td><td><a href=\"https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/time-series/gcn-lstm-time-series.ipynb\" alt=\"Open In Binder\" target=\"_parent\"><img src=\"https://mybinder.org/badge_logo.svg\"/></a></td><td><a href=\"https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/time-series/gcn-lstm-time-series.ipynb\" alt=\"Open In Colab\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\"/></a></td></tr></table>",
"_____no_output_____"
],
[
"The dynamics of many real-world phenomena are spatio-temporal in nature. Traffic forecasting is a quintessential example of spatio-temporal problems for which we present here a deep learning framework that models speed prediction using spatio-temporal data. The task is challenging due to two main inter-linked factors: (1) the complex spatial dependency on road networks, and (2) non-linear temporal dynamics with changing road conditions.\n\nTo address these challenges, here we explore a neural network architecture that learns from both the spatial road network data and time-series of historical speed changes to forecast speeds on road segments at a future time. In the following we demo how to forecast speeds on road segments through a `graph convolution` and `LSTM` hybrid model. The spatial dependency of the road networks are learnt through multiple graph convolution layers stacked over multiple LSTM, sequence to sequence model, layers that leverage the historical speeds on top of the network structure to predicts speeds in the future for each entity. \n\nThe architecture of the gcn-lstm model is inpired by the paper: [T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction](https://ieeexplore.ieee.org/document/8809901).\n\nThe authors have made available the implementation of their model in their github [repo](https://github.com/lehaifeng/T-GCN).\nThere has been a few differences in the architecture proposed in the paper and the implementation of the graph convolution component, these issues have been documented [here](https://github.com/lehaifeng/T-GCN/issues/18) and [here](https://github.com/lehaifeng/T-GCN/issues/14). The `GraphConvolutionLSTM` model in `StellarGraph` emulates the model as explained in the paper while giving additional flexibility of adding any number of `graph convolution` and `LSTM` layers. \n\nConcretely, the architecture of `GraphConvolutionLSTM` is as follows:\n\n1. User defined number of graph convolutional layers (Reference: [Kipf & Welling (ICLR 2017)](http://arxiv.org/abs/1609.02907)).\n2. User defined number of LSTM layers. The [TGCN](https://ieeexplore.ieee.org/document/8809901) uses GRU instead of LSTM. In practice there are not any remarkable differences between the two types of layers. We use LSTM as they are more frequently used.\n3. A Dropout and a Dense layer as they experimentally showed improvement in performance and managing over-fitting.\n\n## References: \n\n* [T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction](https://ieeexplore.ieee.org/document/8809901)\n* [https://github.com/lehaifeng/T-GCN](https://github.com/lehaifeng/T-GCN)\n* [Semi-Supervised Classification with Graph Convolutional Networks](http://arxiv.org/abs/1609.02907)\n\n**Note: this method is applicable for uni-variate timeseries forecasting.**",
"_____no_output_____"
]
],
[
[
"# install StellarGraph if running on Google Colab\nimport sys\nif 'google.colab' in sys.modules:\n %pip install -q stellargraph[demos]==1.1.0b",
"_____no_output_____"
],
[
"# verify that we're using the correct version of StellarGraph for this notebook\nimport stellargraph as sg\n\ntry:\n sg.utils.validate_notebook_version(\"1.1.0b\")\nexcept AttributeError:\n raise ValueError(\n f\"This notebook requires StellarGraph version 1.1.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>.\"\n ) from None",
"_____no_output_____"
],
[
"import os\nimport sys\nimport urllib.request\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.lines as mlines\n\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import Sequential, Model\nfrom tensorflow.keras.layers import LSTM, Dense, Dropout, Input",
"_____no_output_____"
]
],
[
[
"## Data\n\nWe apply the gcn-lstm model to the **Los-loop** data. This traffic dataset\ncontains traffic information collected from loop detectors in the highway of Los Angeles County (Jagadish\net al., 2014). There are several processed versions of this dataset used by the research community working in Traffic forecasting space. \n\nThis demo is based on the pre-processed version of the dataset used by the TGCN paper. It can be directly accessed from there [github repo](https://github.com/lehaifeng/T-GCN/tree/master/data). \n\nThis dataset contains traffic speeds from Mar.1 to Mar.7, 2012 of 207 sensors, recorded every 5 minutes. \n\nIn order to use the model, we need:\n\n* A N by N adjacency matrix, which describes the distance relationship between the N sensors,\n* A N by T feature matrix, which describes the (f_1, .., f_T) speed records over T timesteps for the N sensors.\n\nA couple of other references for the same data albeit different time length are as follows: \n\n* [DIFFUSION CONVOLUTIONAL RECURRENT NEURAL NETWORK: DATA-DRIVEN TRAFFIC FORECASTING](https://github.com/liyaguang/DCRNN/tree/master/data): This dataset consists of 207 sensors and collect 4 months of data ranging from Mar 1st 2012 to Jun 30th 2012 for the experiment. It has some missing values.\n* [ST-MetaNet: Urban Traffic Prediction from Spatio-Temporal Data Using Deep Meta Learning](https://github.com/panzheyi/ST-MetaNet/tree/master/traffic-prediction). This work uses the DCRNN pre-proccessed data.",
"_____no_output_____"
],
[
"## Loading and pre-processing the data",
"_____no_output_____"
]
],
[
[
"import stellargraph as sg",
"_____no_output_____"
]
],
[
[
"This demo is based on the pre-processed version of the dataset used by the TGCN paper.",
"_____no_output_____"
]
],
[
[
"dataset = sg.datasets.METR_LA()",
"_____no_output_____"
]
],
[
[
"(See [the \"Loading from Pandas\" demo](../basics/loading-pandas.ipynb) for details on how data can be loaded.)",
"_____no_output_____"
]
],
[
[
"speed_data, sensor_dist_adj = dataset.load()\nnum_nodes = speed_data.shape[1]\ntime_len = speed_data.shape[0]\nprint(\"No. of sensors:\", num_nodes, \"\\nNo of timesteps:\", time_len)",
"No. of sensors: 207 \nNo of timesteps: 2016\n"
]
],
[
[
"**Let's look at a sample of speed data.**",
"_____no_output_____"
]
],
[
[
"speed_data.head()",
"_____no_output_____"
]
],
[
[
"As you can see above, there are 2016 observations (timesteps) of speed records over 207 sensors. Speeds are recorded every 5 minutes. This means that, for a single hour, you will have 12 observations. Similarly, a single day will contain 288 (12x24) observations. Overall, the data consists of speeds recorded every 5 minutes over 207 for 7 days (12X24X7).\n\n### Forecasting with spatio-temporal data as a supervised learing problem \n\nTime series forecasting problem can be cast as a supervised learning problem. We can do this by using previous timesteps as input features and use the next timestep as the output to predict. Then, the spatio-temporal forecasting question can be modeled as predicting the feature value in the future, given the historical values of the feature for that entity as well as the feature values of the entities \"connected\" to the entity. For example, the speed prediction problem, the historical speeds of the sensors are the timeseries and the distance between the sensors is the indicator for connectivity or closeness of sensors.",
"_____no_output_____"
],
[
"### Train/test split\n\nJust like for modeling any standard supervised learning problem, we first split the data into mutually exclusive train and test sets. However, unlike, a standard supervised learning problem, in timeseries analysis, the data is in some choronological time respecting order and the train/test happens along the timeline. Lets say, we use the first `T_t` observations for training and the remaining `T - T_t` of the total `T` observations for testing. \n\nIn the following we use first 80% observations for training and the rest for testing.",
"_____no_output_____"
]
],
[
[
"def train_test_split(data, train_portion):\n time_len = data.shape[0]\n train_size = int(time_len * train_portion)\n train_data = np.array(data[:train_size])\n test_data = np.array(data[train_size:])\n return train_data, test_data",
"_____no_output_____"
],
[
"train_rate = 0.8",
"_____no_output_____"
],
[
"train_data, test_data = train_test_split(speed_data, train_rate)\nprint(\"Train data: \", train_data.shape)\nprint(\"Test data: \", test_data.shape)",
"Train data: (1612, 207)\nTest data: (404, 207)\n"
]
],
[
[
"### Scaling\nIt is generally a good practice to rescale the data from the original range so that all values are within the range of 0 and 1. Normalization can be useful and even necessary when your time series data has input values with differing scales. In the following we normalize the speed timeseries by the maximum and minimum values of speeds in the train data. \n\nNote: `MinMaxScaler` in `scikit learn` library is typically used for transforming data. However, in timeseries data since the features are distinct timesteps, so using the historical range of values in a particular timestep as the range of values in later timesteps, may not be correct. Hence, we use the maximum and the minimum of the entire range of values in the timeseries to scale and transform the train and test sets respectively.",
"_____no_output_____"
]
],
[
[
"def scale_data(train_data, test_data):\n max_speed = train_data.max()\n min_speed = train_data.min()\n train_scaled = (train_data - min_speed) / (max_speed - min_speed)\n test_scaled = (test_data - min_speed) / (max_speed - min_speed)\n return train_scaled, test_scaled",
"_____no_output_____"
],
[
"train_scaled, test_scaled = scale_data(train_data, test_data)",
"_____no_output_____"
]
],
[
[
"### Sequence data preparation for LSTM\n\nWe first need to prepare the data to be fed into an LSTM. \nThe LSTM model learns a function that maps a sequence of past observations as input to an output observation. As such, the sequence of observations must be transformed into multiple examples from which the LSTM can learn.\n\nTo make it concrete in terms of the speed prediction problem, we choose to use 50 minutes of historical speed observations to predict the speed in future, lets say, 1 hour ahead. Hence, we would first reshape the timeseries data into windows of 10 historical observations for each segment as the input and the speed 60 minutes later is the label we are interested in predicting. We use the sliding window approach to prepare the data. This is how it works: \n\n* Starting from the beginning of the timeseries, we take the first 10 speed records as the 10 input features and the speed 12 timesteps head (60 minutes) as the speed we want to predict. \n* Shift the timeseries by one timestep and take the 10 observations from the current point as the input feartures and the speed one hour ahead as the output to predict. \n* Keep shifting by 1 timestep and picking the 10 timestep window from the current time as input feature and the speed one hour ahead of the 10th timestep as the output to predict, for the entire data.\n* The above steps are done for each sensor. \n\nThe function below returns the above transformed timeseries data for the model to train on. The parameter `seq_len` is the size of the past window of information. The `pre_len` is how far in the future does the model need to learn to predict. \n\nFor this demo: \n\n* Each training observation are 10 historical speeds (`seq_len`).\n* Each training prediction is the speed 60 minutes later (`pre_len`).",
"_____no_output_____"
]
],
[
[
"seq_len = 10\npre_len = 12",
"_____no_output_____"
],
[
"def sequence_data_preparation(seq_len, pre_len, train_data, test_data):\n trainX, trainY, testX, testY = [], [], [], []\n\n for i in range(len(train_data) - int(seq_len + pre_len - 1)):\n a = train_data[\n i : i + seq_len + pre_len,\n ]\n trainX.append(a[:seq_len])\n trainY.append(a[-1])\n\n for i in range(len(test_data) - int(seq_len + pre_len - 1)):\n b = test_data[\n i : i + seq_len + pre_len,\n ]\n testX.append(\n b[:seq_len,]\n )\n testY.append(b[-1])\n\n trainX = np.array(trainX)\n trainY = np.array(trainY)\n testX = np.array(testX)\n testY = np.array(testY)\n\n return trainX, trainY, testX, testY",
"_____no_output_____"
],
[
"trainX, trainY, testX, testY = sequence_data_preparation(\n seq_len, pre_len, train_scaled, test_scaled\n)\nprint(trainX.shape)\nprint(trainY.shape)\nprint(testX.shape)\nprint(testY.shape)",
"(1591, 10, 207)\n(1591, 207)\n(383, 10, 207)\n(383, 207)\n"
]
],
[
[
"## StellarGraph Graph Convolution and LSTM model",
"_____no_output_____"
]
],
[
[
"from stellargraph.layer import GraphConvolutionLSTM",
"_____no_output_____"
],
[
"gcn_lstm = GraphConvolutionLSTM(\n seq_len=seq_len,\n adj=sensor_dist_adj,\n gc_layers=2,\n gc_activations=[\"relu\", \"relu\"],\n lstm_layer_size=[200],\n lstm_activations=[\"tanh\"],\n)",
"_____no_output_____"
],
[
"x_input, x_output = gcn_lstm.in_out_tensors()",
"_____no_output_____"
],
[
"model = Model(inputs=x_input, outputs=x_output)",
"_____no_output_____"
],
[
"model.compile(optimizer=\"adam\", loss=\"mae\", metrics=[\"mse\"])",
"_____no_output_____"
],
[
"history = model.fit(\n trainX,\n trainY,\n epochs=100,\n batch_size=60,\n shuffle=True,\n verbose=0,\n validation_data=[testX, testY],\n)",
"_____no_output_____"
],
[
"model.summary()",
"Model: \"model\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) [(None, 10, 207)] 0 \n_________________________________________________________________\nfixed_adjacency_graph_convol (None, 10, 207) 43156 \n_________________________________________________________________\nfixed_adjacency_graph_convol (None, 10, 207) 43156 \n_________________________________________________________________\nlstm (LSTM) (None, 200) 326400 \n_________________________________________________________________\ndropout (Dropout) (None, 200) 0 \n_________________________________________________________________\ndense (Dense) (None, 207) 41607 \n=================================================================\nTotal params: 454,319\nTrainable params: 368,621\nNon-trainable params: 85,698\n_________________________________________________________________\n"
],
[
"print(\n \"Train loss: \",\n history.history[\"loss\"][-1],\n \"\\nTest loss:\",\n history.history[\"val_loss\"][-1],\n)",
"Train loss: 0.05301835040235804 \nTest loss: 0.06069195360995771\n"
],
[
"plt.plot(history.history[\"loss\"], label=\"Training loss\")\nplt.plot(history.history[\"val_loss\"], label=\"Test loss\")\nplt.legend()\nplt.xlabel(\"epoch\")\nplt.ylabel(\"loss\")\nplt.show()",
"_____no_output_____"
],
[
"ythat = model.predict(trainX)\nyhat = model.predict(testX)",
"_____no_output_____"
]
],
[
[
"## Rescale values\n\nRecale the predicted values to the original value range of the timeseries.",
"_____no_output_____"
]
],
[
[
"## Rescale values\nmax_speed = train_data.max()\nmin_speed = train_data.min()\n\n## actual train and test values\ntrain_rescref = np.array(trainY * max_speed)\ntest_rescref = np.array(testY * max_speed)",
"_____no_output_____"
],
[
"## Rescale model predicted values\ntrain_rescpred = np.array((ythat) * max_speed)\ntest_rescpred = np.array((yhat) * max_speed)",
"_____no_output_____"
]
],
[
[
"## Measuring the performance of the model\n\nTo understand how well the model is performing, we compare it against a naive benchmark.\n\n1. Naive prediction: using the most recently **observed** value as the predicted value. Note, that albeit being **naive** this is a very strong baseline to beat. Especially, when speeds are recorded at a 5 minutes granularity, one does not expect many drastic changes within such a short period of time. Hence, for short-term predictions naive is a reasonable good guess.",
"_____no_output_____"
],
[
"### Naive prediction benchmark (using latest observed value)",
"_____no_output_____"
]
],
[
[
"## Naive prediction benchmark (using previous observed value)\n\ntestnpred = np.array(testX).transpose(1, 0, 2)[\n -1\n] # picking the last speed of the 10 sequence for each segment in each sample\ntestnpredc = (testnpred) * max_speed",
"_____no_output_____"
],
[
"## Performance measures\n\nseg_mael = []\nseg_masel = []\nseg_nmael = []\n\nfor j in range(testX.shape[-1]):\n\n seg_mael.append(\n np.mean(np.abs(test_rescref.T[j] - test_rescpred.T[j]))\n ) # Mean Absolute Error for NN\n seg_nmael.append(\n np.mean(np.abs(test_rescref.T[j] - testnpredc.T[j]))\n ) # Mean Absolute Error for naive prediction\n if seg_nmael[-1] != 0:\n seg_masel.append(\n seg_mael[-1] / seg_nmael[-1]\n ) # Ratio of the two: Mean Absolute Scaled Error\n else:\n seg_masel.append(np.NaN)\n\nprint(\"Total (ave) MAE for NN: \" + str(np.mean(np.array(seg_mael))))\nprint(\"Total (ave) MAE for naive prediction: \" + str(np.mean(np.array(seg_nmael))))\nprint(\n \"Total (ave) MASE for per-segment NN/naive MAE: \"\n + str(np.nanmean(np.array(seg_masel)))\n)\nprint(\n \"...note that MASE<1 (for a given segment) means that the NN prediction is better than the naive prediction.\"\n)",
"Total (ave) MAE for NN: 4.248436818644403\nTotal (ave) MAE for naive prediction: 5.877064444860809\nTotal (ave) MASE for per-segment NN/naive MAE: 0.7389886237426843\n...note that MASE<1 (for a given segment) means that the NN prediction is better than the naive prediction.\n"
],
[
"# plot violin plot of MAE for naive and NN predictions\nfig, ax = plt.subplots()\n# xl = minsl\n\nax.violinplot(\n list(seg_mael), showmeans=True, showmedians=False, showextrema=False, widths=1.0\n)\n\nax.violinplot(\n list(seg_nmael), showmeans=True, showmedians=False, showextrema=False, widths=1.0\n)\n\nline1 = mlines.Line2D([], [], label=\"NN\")\nline2 = mlines.Line2D([], [], color=\"C1\", label=\"Instantaneous\")\n\nax.set_xlabel(\"Scaled distribution amplitude (after Gaussian convolution)\")\nax.set_ylabel(\"Mean Absolute Error\")\nax.set_title(\"Distribution over segments: NN pred (blue) and naive pred (orange)\")\nplt.legend(handles=(line1, line2), title=\"Prediction Model\", loc=2)\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Plot of actual and predicted speeds on a sample sensor",
"_____no_output_____"
]
],
[
[
"##all test result visualization\nfig1 = plt.figure(figsize=(15, 8))\n# ax1 = fig1.add_subplot(1,1,1)\na_pred = test_rescpred[:, 1]\na_true = test_rescref[:, 1]\nplt.plot(a_pred, \"r-\", label=\"prediction\")\nplt.plot(a_true, \"b-\", label=\"true\")\nplt.xlabel(\"time\")\nplt.ylabel(\"speed\")\nplt.legend(loc=\"best\", fontsize=10)\nplt.show()",
"_____no_output_____"
]
],
[
[
"<table><tr><td>Run the latest release of this notebook:</td><td><a href=\"https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/time-series/gcn-lstm-time-series.ipynb\" alt=\"Open In Binder\" target=\"_parent\"><img src=\"https://mybinder.org/badge_logo.svg\"/></a></td><td><a href=\"https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/time-series/gcn-lstm-time-series.ipynb\" alt=\"Open In Colab\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\"/></a></td></tr></table>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e734977498be288b434a0429b0b596b26d56d098 | 884,925 | ipynb | Jupyter Notebook | notebooks/00_jupyterplot.ipynb | lvwerra/jupyterplot | 612be3a4a5058fafab07865599975e7f6ec8594a | [
"Apache-2.0"
] | 97 | 2020-01-19T17:14:01.000Z | 2022-03-18T11:57:26.000Z | notebooks/00_jupyterplot.ipynb | lvwerra/jupyterplot | 612be3a4a5058fafab07865599975e7f6ec8594a | [
"Apache-2.0"
] | 9 | 2020-03-12T11:45:11.000Z | 2022-02-26T06:24:17.000Z | notebooks/00_jupyterplot.ipynb | lvwerra/jupyterplot | 612be3a4a5058fafab07865599975e7f6ec8594a | [
"Apache-2.0"
] | 9 | 2020-01-29T15:20:56.000Z | 2022-02-09T22:44:35.000Z | 44.244038 | 83,298 | 0.445461 | [
[
[
"# default_exp jupyterplot",
"_____no_output_____"
]
],
[
[
"# jupyterplot\n\n> Create real-time plots in Jupyter Notebooks.",
"_____no_output_____"
]
],
[
[
"# hide\nfrom nbdev.showdoc import *",
"_____no_output_____"
],
[
"# export\nimport IPython\nimport matplotlib.pyplot as plt\n\ntry:\n from lrcurve.plot_learning_curve import PlotLearningCurve\nexcept:\n from lrcurve.plot_learning_curve import PlotLearningCurve\n# so sorry for this hack :( the first import goes through\n# the lrcurve __init__ which triggers a keras/tf import which should\n# be avoided. the second import bypasses this and imports directly\n# from the plot_learning_curve module not requiring keras/tf.\n\n\nclass ProgressPlot(PlotLearningCurve):\n \"\"\"\n Real-time progress plots for Jupyter notebooks.\n \n Parameters\n ----------\n plot_names : list of str, optional, default: ``['plot']``\n Labels for plots. Length also determines number of plots.\n \n line_names: list of str, optional, default: ``['line-1']``\n Labels for lines. Length also determines number of lines per plot.\n \n line_colors: list of str, optional, default: ``None``\n Color cycle for lines in hex format. If ``None``\n the standard matplotlib color cycle is used.\n \n x_lim: list, optional, default: ``[None, None]``\n List with ``[x_min, x_max]``. If value is ``None`` the \n axes on that side is dynamically adjusted.\n \n y_lim: list, optional, default: ``[None, None]``\n List with ``[y_min, y_max]``. If value is ``None`` the \n axes on that side is dynamically adjusted. \n \n x_label='iteration': str, optional, default: ``'iteration'``\n Label for the x-axis. Default is ``'iteration'``\n \n x_iterator: boolean, optional, default: ``True``\n If flag is ``True`` an internal iterator is used as\n x values for the plot. If ``False`` the update function\n requires an x value.\n \n height: int, optional, default: ``None``\n The height in pixels of the plot (default None). The default\n behavior is to use 200px per facet and an additional 90px for\n the x-axis and legend.\n \n width: int, optional, default: ``600``\n The width in pixels of the plot (default 600).\n \n display_fn: callable, optional, default: ``IPython.display.display``\n To display HTML or JavaScript in a notebook with an IPython\n backend, `IPython.display.display` is called. The called function\n can be overwritten by setting this argument (mostly useful for\n internal testing).\n \n debug: boolean, optional, default: ``False``\n Depending on the notebook, a JavaScript evaluation does not provide\n a stack trace in the developer console. 
Setting this to `true` works\n around that by injecting `<script>` tags instead.\n \n \n \"\"\"\n\n def __init__(\n self,\n plot_names=[\"plot\"],\n line_names=[\"line-1\"],\n line_colors=None,\n x_lim=[None, None],\n y_lim=[None, None],\n x_label=\"iteration\",\n x_iterator=True,\n height=None,\n width=600,\n display_fn=IPython.display.display,\n debug=False,\n ):\n\n self.width = width\n self.height = height\n self.display_fn = display_fn\n self.debug = debug\n self._plot_is_setup = False\n self._plots = plot_names\n self.line_names = line_names\n self.line_colors = line_colors\n self.x_lim = x_lim\n self.y_lim = y_lim\n self.x_label = x_label\n self.iterator = 0\n\n if isinstance(y_lim[0], list):\n if len(y_lim)==len(plot_names):\n self.y_lim = y_lim\n else:\n raise ValueError(f\"Unequal number of y limits and plots ({len(y_lim)} and {len(plot_names)}).\")\n else:\n self.y_lim = [y_lim] * len(plot_names)\n \n if not line_colors:\n line_colors = plt.rcParams[\"axes.prop_cycle\"].by_key()[\"color\"]\n\n # setup color cycle from list of line colors\n self.line_colors = [\n line_colors[i % len(line_colors)] for i in range(len(line_names))\n ]\n\n if x_iterator:\n self.update = self._update_with_iter\n else:\n self.update = self._update_with_x\n\n self._setup_plot()\n\n def _update_with_iter(self, y):\n \"\"\"\n Update plot with internal iterator.\n \n Parameters\n ----------\n y: float, list, dict\n y-value of data update. If single plot with\n single line a float can be passed. Otherwise\n a list of lists for each plot and line or a\n dict of dicts with the plot and line names must\n be passed.\n \"\"\"\n self._update_with_x(self.iterator, y)\n self.iterator += 1\n\n def _update_with_x(self, x, y):\n \"\"\"\n Update plot with external x-values.\n \n Parameters\n ----------\n x: int, float\n x-value of data update.\n y: float, list, dict\n y-value of data update. If single plot with\n single line a float can be passed. Otherwise\n a list of lists for each plot and line or a\n dict of dicts with the plot and line names must\n be passed.\n \"\"\"\n y = self._parse_y(y)\n self.append(x, y)\n self.draw()\n\n def _parse_y(self, y):\n \"\"\"Parse y-data to dict for js.\"\"\"\n if isinstance(y, dict):\n return y\n elif isinstance(y, list):\n return self._y_list_to_dict(y)\n elif isinstance(y, (int, float)):\n return self._y_scalar_to_dict(y)\n else:\n raise ValueError(\n \"Not supported data type for update. 
Should be one of dict/list/float.\"\n )\n\n def _y_list_to_dict(self, y):\n \"\"\"Parse y-data in list to dict for js.\"\"\"\n if not (len(y) == len(self._plots)):\n raise ValueError(\"Number of plot updates not equal to number of plots!\")\n if not all(isinstance(yi, list) for yi in y):\n raise ValueError(\"Line updates not of type list!\")\n if not all(len(yi) == len(self.line_names) for yi in y):\n raise ValueError(\n \"Number of line update values not equal to number of lines!\"\n )\n\n y_dict = {\n plot: {line: y_ij for line, y_ij in zip(self.line_names, y_i)}\n for plot, y_i in zip(self._plots, y)\n }\n return y_dict\n\n def _y_scalar_to_dict(self, y):\n \"\"\"Parse y-data int/or float to dict for js.\"\"\"\n if not (len(self._plots) == 1 and len(self.line_names) == 1):\n raise ValueError(\n \"Can only update with int/float with one plot and one line.\"\n )\n\n y_dict = {self._plots[0]: {self.line_names[0]: y}}\n return y_dict\n\n def _setup_plot(self):\n \"\"\"Setup progress plot by calling initializing PlotLearningCurve class.\"\"\"\n \n line_config = {\n name: {\"name\": name, \"color\": color}\n for name, color in zip(self.line_names, self.line_colors)\n }\n facet_config = {\n name: {\"name\": name, \"limit\": y_lim} for name, y_lim in zip(self._plots, self.y_lim)\n }\n xaxis_config = {\"name\": self.x_label, \"limit\": self.x_lim}\n\n super().__init__(\n height=self.height,\n width=self.width,\n line_config=line_config,\n facet_config=facet_config,\n xaxis_config=xaxis_config,\n display_fn=self.display_fn,\n debug=self.debug,\n )",
"_____no_output_____"
]
],
[
[
"## Example",
"_____no_output_____"
]
],
[
[
"from jupyterplot import ProgressPlot\n\npp = ProgressPlot()\nfor i in range(100):\n pp.update(1 / (i + 1))\npp.finalize()",
"_____no_output_____"
],
[
"import numpy as np\n\npp = ProgressPlot(x_iterator=False, x_lim=[-1, 1], y_lim=[-1, 1])\nfor i in range(1001):\n pp.update(np.sin(2 * np.pi * i / 1000), np.cos(2 * np.pi * i / 1000))\npp.finalize()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e73497f0794ffe38df69fa11ef9bef074126ac97 | 50,947 | ipynb | Jupyter Notebook | 100_Numpy_exercises.ipynb | ShuoGH/numpy-100 | c8eed4cbdef756bbfe9b56312bd8ec15301d5005 | [
"MIT"
] | null | null | null | 100_Numpy_exercises.ipynb | ShuoGH/numpy-100 | c8eed4cbdef756bbfe9b56312bd8ec15301d5005 | [
"MIT"
] | null | null | null | 100_Numpy_exercises.ipynb | ShuoGH/numpy-100 | c8eed4cbdef756bbfe9b56312bd8ec15301d5005 | [
"MIT"
] | null | null | null | 23.488704 | 289 | 0.487566 | [
[
[
"# 100 numpy exercises\n\nThis is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach.\n\n\nIf you find an error or think you've a better way to solve some of them, feel free to open an issue at <https://github.com/rougier/numpy-100>",
"_____no_output_____"
],
[
"#### 1. Import the numpy package under the name `np` (★☆☆)",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
]
],
[
[
"#### 2. Print the numpy version and the configuration (★☆☆)",
"_____no_output_____"
]
],
[
[
"print(np.__version__)\nnp.show_config()",
"_____no_output_____"
]
],
[
[
"#### 3. Create a null vector of size 10 (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.zeros(10)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 4. How to find the memory size of any array (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.zeros((10,10))\nprint(\"%d bytes\" % (Z.size * Z.itemsize))",
"_____no_output_____"
]
],
[
[
"#### 5. How to get the documentation of the numpy add function from the command line? (★☆☆)",
"_____no_output_____"
]
],
[
[
"%run `python -c \"import numpy; numpy.info(numpy.add)\"`",
"_____no_output_____"
]
],
[
[
"#### 6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.zeros(10)\nZ[4] = 1\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 7. Create a vector with values ranging from 10 to 49 (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.arange(10,50)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 8. Reverse a vector (first element becomes last) (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.arange(50)\nZ = Z[::-1]\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.arange(9).reshape(3,3)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 10. Find indices of non-zero elements from \\[1,2,0,0,4,0\\] (★☆☆)",
"_____no_output_____"
]
],
[
[
"nz = np.nonzero([1,2,0,0,4,0])\nprint(nz)",
"_____no_output_____"
]
],
[
[
"#### 11. Create a 3x3 identity matrix (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.eye(3)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 12. Create a 3x3x3 array with random values (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.random.random((3,3,3))\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.random.random((10,10))\nZmin, Zmax = Z.min(), Z.max()\nprint(Zmin, Zmax)",
"_____no_output_____"
]
],
[
[
"#### 14. Create a random vector of size 30 and find the mean value (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.random.random(30)\nm = Z.mean()\nprint(m)",
"_____no_output_____"
]
],
[
[
"#### 15. Create a 2d array with 1 on the border and 0 inside (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.ones((10,10))\nZ[1:-1,1:-1] = 0\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 16. How to add a border (filled with 0's) around an existing array? (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.ones((5,5))\nZ = np.pad(Z, pad_width=1, mode='constant', constant_values=0)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 17. What is the result of the following expression? (★☆☆)",
"_____no_output_____"
]
],
[
[
"print(0 * np.nan)\nprint(np.nan == np.nan)\nprint(np.inf > np.nan)\nprint(np.nan - np.nan)\nprint(0.3 == 3 * 0.1)",
"_____no_output_____"
]
],
[
[
"#### 18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.diag(1+np.arange(4),k=-1)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.zeros((8,8),dtype=int)\nZ[1::2,::2] = 1\nZ[::2,1::2] = 1\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element?",
"_____no_output_____"
]
],
[
[
"print(np.unravel_index(100,(6,7,8)))",
"_____no_output_____"
]
],
[
[
"#### 21. Create a checkerboard 8x8 matrix using the tile function (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.tile( np.array([[0,1],[1,0]]), (4,4))\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 22. Normalize a 5x5 random matrix (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.random.random((5,5))\nZmax, Zmin = Z.max(), Z.min()\nZ = (Z - Zmin)/(Zmax - Zmin)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 23. Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆)",
"_____no_output_____"
]
],
[
[
"color = np.dtype([(\"r\", np.ubyte, 1),\n (\"g\", np.ubyte, 1),\n (\"b\", np.ubyte, 1),\n (\"a\", np.ubyte, 1)])",
"_____no_output_____"
]
],
[
[
"#### 24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z = np.dot(np.ones((5,3)), np.ones((3,2)))\nprint(Z)\n\n# Alternative solution, in Python 3.5 and above\nZ = np.ones((5,3)) @ np.ones((3,2))",
"_____no_output_____"
]
],
[
[
"#### 25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)",
"_____no_output_____"
]
],
[
[
"# Author: Evgeni Burovski\n\nZ = np.arange(11)\nZ[(3 < Z) & (Z <= 8)] *= -1\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 26. What is the output of the following script? (★☆☆)",
"_____no_output_____"
]
],
[
[
"# Author: Jake VanderPlas\n\nprint(sum(range(5),-1))\nfrom numpy import *\nprint(sum(range(5),-1))",
"_____no_output_____"
]
],
[
[
"#### 27. Consider an integer vector Z, which of these expressions are legal? (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z**Z\n2 << Z >> 2\nZ <- Z\n1j*Z\nZ/1/1\nZ<Z>Z",
"_____no_output_____"
]
],
[
[
"#### 28. What are the result of the following expressions?",
"_____no_output_____"
]
],
[
[
"print(np.array(0) / np.array(0))\nprint(np.array(0) // np.array(0))\nprint(np.array([np.nan]).astype(int).astype(float))",
"_____no_output_____"
]
],
[
[
"#### 29. How to round away from zero a float array ? (★☆☆)",
"_____no_output_____"
]
],
[
[
"# Author: Charles R Harris\n\nZ = np.random.uniform(-10,+10,10)\nprint (np.copysign(np.ceil(np.abs(Z)), Z))",
"_____no_output_____"
]
],
[
[
"#### 30. How to find common values between two arrays? (★☆☆)",
"_____no_output_____"
]
],
[
[
"Z1 = np.random.randint(0,10,10)\nZ2 = np.random.randint(0,10,10)\nprint(np.intersect1d(Z1,Z2))",
"_____no_output_____"
]
],
[
[
"#### 31. How to ignore all numpy warnings (not recommended)? (★☆☆)",
"_____no_output_____"
]
],
[
[
"# Suicide mode on\ndefaults = np.seterr(all=\"ignore\")\nZ = np.ones(1) / 0\n\n# Back to sanity\n_ = np.seterr(**defaults)\n\nAn equivalent way, with a context manager:\n\nwith np.errstate(divide='ignore'):\n Z = np.ones(1) / 0",
"_____no_output_____"
]
],
[
[
"#### 32. Is the following expressions true? (★☆☆)",
"_____no_output_____"
]
],
[
[
"np.sqrt(-1) == np.emath.sqrt(-1)",
"_____no_output_____"
]
],
[
[
"#### 33. How to get the dates of yesterday, today and tomorrow? (★☆☆)",
"_____no_output_____"
]
],
[
[
"yesterday = np.datetime64('today', 'D') - np.timedelta64(1, 'D')\ntoday = np.datetime64('today', 'D')\ntomorrow = np.datetime64('today', 'D') + np.timedelta64(1, 'D')",
"_____no_output_____"
]
],
[
[
"#### 34. How to get all the dates corresponding to the month of July 2016? (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.arange('2016-07', '2016-08', dtype='datetime64[D]')\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 35. How to compute ((A+B)\\*(-A/2)) in place (without copy)? (★★☆)",
"_____no_output_____"
]
],
[
[
"A = np.ones(3)*1\nB = np.ones(3)*2\nC = np.ones(3)*3\nnp.add(A,B,out=B)\nnp.divide(A,2,out=A)\nnp.negative(A,out=A)\nnp.multiply(A,B,out=A)",
"_____no_output_____"
]
],
[
[
"#### 36. Extract the integer part of a random array using 5 different methods (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.random.uniform(0,10,10)\n\nprint (Z - Z%1)\nprint (np.floor(Z))\nprint (np.ceil(Z)-1)\nprint (Z.astype(int))\nprint (np.trunc(Z))",
"_____no_output_____"
]
],
[
[
"#### 37. Create a 5x5 matrix with row values ranging from 0 to 4 (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.zeros((5,5))\nZ += np.arange(5)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 38. Consider a generator function that generates 10 integers and use it to build an array (★☆☆)",
"_____no_output_____"
]
],
[
[
"def generate():\n for x in range(10):\n yield x\nZ = np.fromiter(generate(),dtype=float,count=-1)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 39. Create a vector of size 10 with values ranging from 0 to 1, both excluded (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.linspace(0,1,11,endpoint=False)[1:]\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 40. Create a random vector of size 10 and sort it (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.random.random(10)\nZ.sort()\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 41. How to sum a small array faster than np.sum? (★★☆)",
"_____no_output_____"
]
],
[
[
"# Author: Evgeni Burovski\n\nZ = np.arange(10)\nnp.add.reduce(Z)",
"_____no_output_____"
]
],
[
[
"#### 42. Consider two random array A and B, check if they are equal (★★☆)",
"_____no_output_____"
]
],
[
[
"A = np.random.randint(0,2,5)\nB = np.random.randint(0,2,5)\n\n# Assuming identical shape of the arrays and a tolerance for the comparison of values\nequal = np.allclose(A,B)\nprint(equal)\n\n# Checking both the shape and the element values, no tolerance (values have to be exactly equal)\nequal = np.array_equal(A,B)\nprint(equal)",
"_____no_output_____"
]
],
[
[
"#### 43. Make an array immutable (read-only) (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.zeros(10)\nZ.flags.writeable = False\nZ[0] = 1",
"_____no_output_____"
]
],
[
[
"#### 44. Consider a random 10x2 matrix representing cartesian coordinates, convert them to polar coordinates (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.random.random((10,2))\nX,Y = Z[:,0], Z[:,1]\nR = np.sqrt(X**2+Y**2)\nT = np.arctan2(Y,X)\nprint(R)\nprint(T)",
"_____no_output_____"
]
],
[
[
"#### 45. Create random vector of size 10 and replace the maximum value by 0 (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.random.random(10)\nZ[Z.argmax()] = 0\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 46. Create a structured array with `x` and `y` coordinates covering the \\[0,1\\]x\\[0,1\\] area (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.zeros((5,5), [('x',float),('y',float)])\nZ['x'], Z['y'] = np.meshgrid(np.linspace(0,1,5),\n np.linspace(0,1,5))\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 47. Given two arrays, X and Y, construct the Cauchy matrix C (Cij =1/(xi - yj))",
"_____no_output_____"
]
],
[
[
"# Author: Evgeni Burovski\n\nX = np.arange(8)\nY = X + 0.5\nC = 1.0 / np.subtract.outer(X, Y)\nprint(np.linalg.det(C))",
"_____no_output_____"
]
],
[
[
"#### 48. Print the minimum and maximum representable value for each numpy scalar type (★★☆)",
"_____no_output_____"
]
],
[
[
"for dtype in [np.int8, np.int32, np.int64]:\n print(np.iinfo(dtype).min)\n print(np.iinfo(dtype).max)\nfor dtype in [np.float32, np.float64]:\n print(np.finfo(dtype).min)\n print(np.finfo(dtype).max)\n print(np.finfo(dtype).eps)",
"_____no_output_____"
]
],
[
[
"#### 49. How to print all the values of an array? (★★☆)",
"_____no_output_____"
]
],
[
[
"np.set_printoptions(threshold=np.nan)\nZ = np.zeros((16,16))\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 50. How to find the closest value (to a given scalar) in a vector? (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.arange(100)\nv = np.random.uniform(0,100)\nindex = (np.abs(Z-v)).argmin()\nprint(Z[index])",
"_____no_output_____"
]
],
[
[
"#### 51. Create a structured array representing a position (x,y) and a color (r,g,b) (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.zeros(10, [ ('position', [ ('x', float, 1),\n ('y', float, 1)]),\n ('color', [ ('r', float, 1),\n ('g', float, 1),\n ('b', float, 1)])])\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 52. Consider a random vector with shape (100,2) representing coordinates, find point by point distances (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.random.random((10,2))\nX,Y = np.atleast_2d(Z[:,0], Z[:,1])\nD = np.sqrt( (X-X.T)**2 + (Y-Y.T)**2)\nprint(D)\n\n# Much faster with scipy\nimport scipy\n# Thanks Gavin Heverly-Coulson (#issue 1)\nimport scipy.spatial\n\nZ = np.random.random((10,2))\nD = scipy.spatial.distance.cdist(Z,Z)\nprint(D)",
"_____no_output_____"
]
],
[
[
"#### 53. How to convert a float (32 bits) array into an integer (32 bits) in place?",
"_____no_output_____"
]
],
[
[
"Z = np.arange(10, dtype=np.float32)\nZ = Z.astype(np.int32, copy=False)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 54. How to read the following file? (★★☆)",
"_____no_output_____"
]
],
[
[
"from io import StringIO\n\n# Fake file \ns = StringIO(\"\"\"1, 2, 3, 4, 5\\n\n 6, , , 7, 8\\n\n , , 9,10,11\\n\"\"\")\nZ = np.genfromtxt(s, delimiter=\",\", dtype=np.int)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 55. What is the equivalent of enumerate for numpy arrays? (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.arange(9).reshape(3,3)\nfor index, value in np.ndenumerate(Z):\n print(index, value)\nfor index in np.ndindex(Z.shape):\n print(index, Z[index])",
"_____no_output_____"
]
],
[
[
"#### 56. Generate a generic 2D Gaussian-like array (★★☆)",
"_____no_output_____"
]
],
[
[
"X, Y = np.meshgrid(np.linspace(-1,1,10), np.linspace(-1,1,10))\nD = np.sqrt(X*X+Y*Y)\nsigma, mu = 1.0, 0.0\nG = np.exp(-( (D-mu)**2 / ( 2.0 * sigma**2 ) ) )\nprint(G)",
"_____no_output_____"
]
],
[
[
"#### 57. How to randomly place p elements in a 2D array? (★★☆)",
"_____no_output_____"
]
],
[
[
"# Author: Divakar\n\nn = 10\np = 3\nZ = np.zeros((n,n))\nnp.put(Z, np.random.choice(range(n*n), p, replace=False),1)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 58. Subtract the mean of each row of a matrix (★★☆)",
"_____no_output_____"
]
],
[
[
"# Author: Warren Weckesser\n\nX = np.random.rand(5, 10)\n\n# Recent versions of numpy\nY = X - X.mean(axis=1, keepdims=True)\n\n# Older versions of numpy\nY = X - X.mean(axis=1).reshape(-1, 1)\n\nprint(Y)",
"_____no_output_____"
]
],
[
[
"#### 59. How to sort an array by the nth column? (★★☆)",
"_____no_output_____"
]
],
[
[
"# Author: Steve Tjoa\n\nZ = np.random.randint(0,10,(3,3))\nprint(Z)\nprint(Z[Z[:,1].argsort()])",
"_____no_output_____"
]
],
[
[
"#### 60. How to tell if a given 2D array has null columns? (★★☆)",
"_____no_output_____"
]
],
[
[
"# Author: Warren Weckesser\n\nZ = np.random.randint(0,3,(3,10))\nprint((~Z.any(axis=0)).any())",
"_____no_output_____"
]
],
[
[
"#### 61. Find the nearest value from a given value in an array (★★☆)",
"_____no_output_____"
]
],
[
[
"Z = np.random.uniform(0,1,10)\nz = 0.5\nm = Z.flat[np.abs(Z - z).argmin()]\nprint(m)",
"_____no_output_____"
]
],
[
[
"#### 62. Considering two arrays with shape (1,3) and (3,1), how to compute their sum using an iterator? (★★☆)",
"_____no_output_____"
]
],
[
[
"A = np.arange(3).reshape(3,1)\nB = np.arange(3).reshape(1,3)\nit = np.nditer([A,B,None])\nfor x,y,z in it: z[...] = x + y\nprint(it.operands[2])",
"_____no_output_____"
]
],
[
[
"#### 63. Create an array class that has a name attribute (★★☆)",
"_____no_output_____"
]
],
[
[
"class NamedArray(np.ndarray):\n def __new__(cls, array, name=\"no name\"):\n obj = np.asarray(array).view(cls)\n obj.name = name\n return obj\n def __array_finalize__(self, obj):\n if obj is None: return\n self.info = getattr(obj, 'name', \"no name\")\n\nZ = NamedArray(np.arange(10), \"range_10\")\nprint (Z.name)",
"_____no_output_____"
]
],
[
[
"#### 64. Consider a given vector, how to add 1 to each element indexed by a second vector (be careful with repeated indices)? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Brett Olsen\n\nZ = np.ones(10)\nI = np.random.randint(0,len(Z),20)\nZ += np.bincount(I, minlength=len(Z))\nprint(Z)\n\n# Another solution\n# Author: Bartosz Telenczuk\nnp.add.at(Z, I, 1)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 65. How to accumulate elements of a vector (X) to an array (F) based on an index list (I)? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Alan G Isaac\n\nX = [1,2,3,4,5,6]\nI = [1,3,9,3,4,1]\nF = np.bincount(I,X)\nprint(F)",
"_____no_output_____"
]
],
[
[
"#### 66. Considering a (w,h,3) image of (dtype=ubyte), compute the number of unique colors (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Nadav Horesh\n\nw,h = 16,16\nI = np.random.randint(0,2,(h,w,3)).astype(np.ubyte)\n#Note that we should compute 256*256 first. \n#Otherwise numpy will only promote F.dtype to 'uint16' and overfolw will occur\nF = I[...,0]*(256*256) + I[...,1]*256 +I[...,2]\nn = len(np.unique(F))\nprint(n)",
"_____no_output_____"
]
],
[
[
"#### 67. Considering a four dimensions array, how to get sum over the last two axis at once? (★★★)",
"_____no_output_____"
]
],
[
[
"A = np.random.randint(0,10,(3,4,3,4))\n# solution by passing a tuple of axes (introduced in numpy 1.7.0)\nsum = A.sum(axis=(-2,-1))\nprint(sum)\n# solution by flattening the last two dimensions into one\n# (useful for functions that don't accept tuples for axis argument)\nsum = A.reshape(A.shape[:-2] + (-1,)).sum(axis=-1)\nprint(sum)",
"_____no_output_____"
]
],
[
[
"#### 68. Considering a one-dimensional vector D, how to compute means of subsets of D using a vector S of same size describing subset indices? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Jaime Fernández del Río\n\nD = np.random.uniform(0,1,100)\nS = np.random.randint(0,10,100)\nD_sums = np.bincount(S, weights=D)\nD_counts = np.bincount(S)\nD_means = D_sums / D_counts\nprint(D_means)\n\n# Pandas solution as a reference due to more intuitive code\nimport pandas as pd\nprint(pd.Series(D).groupby(S).mean())",
"_____no_output_____"
]
],
[
[
"#### 69. How to get the diagonal of a dot product? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Mathieu Blondel\n\nA = np.random.uniform(0,1,(5,5))\nB = np.random.uniform(0,1,(5,5))\n\n# Slow version \nnp.diag(np.dot(A, B))\n\n# Fast version\nnp.sum(A * B.T, axis=1)\n\n# Faster version\nnp.einsum(\"ij,ji->i\", A, B)",
"_____no_output_____"
]
],
[
[
"#### 70. Consider the vector \\[1, 2, 3, 4, 5\\], how to build a new vector with 3 consecutive zeros interleaved between each value? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Warren Weckesser\n\nZ = np.array([1,2,3,4,5])\nnz = 3\nZ0 = np.zeros(len(Z) + (len(Z)-1)*(nz))\nZ0[::nz+1] = Z\nprint(Z0)",
"_____no_output_____"
]
],
[
[
"#### 71. Consider an array of dimension (5,5,3), how to mulitply it by an array with dimensions (5,5)? (★★★)",
"_____no_output_____"
]
],
[
[
"A = np.ones((5,5,3))\nB = 2*np.ones((5,5))\nprint(A * B[:,:,None])",
"_____no_output_____"
]
],
[
[
"#### 72. How to swap two rows of an array? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Eelco Hoogendoorn\n\nA = np.arange(25).reshape(5,5)\nA[[0,1]] = A[[1,0]]\nprint(A)",
"_____no_output_____"
]
],
[
[
"#### 73. Consider a set of 10 triplets describing 10 triangles (with shared vertices), find the set of unique line segments composing all the triangles (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Nicolas P. Rougier\n\nfaces = np.random.randint(0,100,(10,3))\nF = np.roll(faces.repeat(2,axis=1),-1,axis=1)\nF = F.reshape(len(F)*3,2)\nF = np.sort(F,axis=1)\nG = F.view( dtype=[('p0',F.dtype),('p1',F.dtype)] )\nG = np.unique(G)\nprint(G)",
"_____no_output_____"
]
],
[
[
"#### 74. Given an array C that is a bincount, how to produce an array A such that np.bincount(A) == C? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Jaime Fernández del Río\n\nC = np.bincount([1,1,2,3,4,4,6])\nA = np.repeat(np.arange(len(C)), C)\nprint(A)",
"_____no_output_____"
]
],
[
[
"#### 75. How to compute averages using a sliding window over an array? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Jaime Fernández del Río\n\ndef moving_average(a, n=3) :\n ret = np.cumsum(a, dtype=float)\n ret[n:] = ret[n:] - ret[:-n]\n return ret[n - 1:] / n\nZ = np.arange(20)\nprint(moving_average(Z, n=3))",
"_____no_output_____"
]
],
[
[
"#### 76. Consider a one-dimensional array Z, build a two-dimensional array whose first row is (Z\\[0\\],Z\\[1\\],Z\\[2\\]) and each subsequent row is shifted by 1 (last row should be (Z\\[-3\\],Z\\[-2\\],Z\\[-1\\]) (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Joe Kington / Erik Rigtorp\nfrom numpy.lib import stride_tricks\n\ndef rolling(a, window):\n shape = (a.size - window + 1, window)\n strides = (a.itemsize, a.itemsize)\n return stride_tricks.as_strided(a, shape=shape, strides=strides)\nZ = rolling(np.arange(10), 3)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 77. How to negate a boolean, or to change the sign of a float inplace? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Nathaniel J. Smith\n\nZ = np.random.randint(0,2,100)\nnp.logical_not(Z, out=Z)\n\nZ = np.random.uniform(-1.0,1.0,100)\nnp.negative(Z, out=Z)",
"_____no_output_____"
]
],
[
[
"#### 78. Consider 2 sets of points P0,P1 describing lines (2d) and a point p, how to compute distance from p to each line i (P0\\[i\\],P1\\[i\\])? (★★★)",
"_____no_output_____"
]
],
[
[
"def distance(P0, P1, p):\n T = P1 - P0\n L = (T**2).sum(axis=1)\n U = -((P0[:,0]-p[...,0])*T[:,0] + (P0[:,1]-p[...,1])*T[:,1]) / L\n U = U.reshape(len(U),1)\n D = P0 + U*T - p\n return np.sqrt((D**2).sum(axis=1))\n\nP0 = np.random.uniform(-10,10,(10,2))\nP1 = np.random.uniform(-10,10,(10,2))\np = np.random.uniform(-10,10,( 1,2))\nprint(distance(P0, P1, p))",
"_____no_output_____"
]
],
[
[
"#### 79. Consider 2 sets of points P0,P1 describing lines (2d) and a set of points P, how to compute distance from each point j (P\\[j\\]) to each line i (P0\\[i\\],P1\\[i\\])? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Italmassov Kuanysh\n\n# based on distance function from previous question\nP0 = np.random.uniform(-10, 10, (10,2))\nP1 = np.random.uniform(-10,10,(10,2))\np = np.random.uniform(-10, 10, (10,2))\nprint(np.array([distance(P0,P1,p_i) for p_i in p]))",
"_____no_output_____"
]
],
[
[
"#### 80. Consider an arbitrary array, write a function that extract a subpart with a fixed shape and centered on a given element (pad with a `fill` value when necessary) (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Nicolas Rougier\n\nZ = np.random.randint(0,10,(10,10))\nshape = (5,5)\nfill = 0\nposition = (1,1)\n\nR = np.ones(shape, dtype=Z.dtype)*fill\nP = np.array(list(position)).astype(int)\nRs = np.array(list(R.shape)).astype(int)\nZs = np.array(list(Z.shape)).astype(int)\n\nR_start = np.zeros((len(shape),)).astype(int)\nR_stop = np.array(list(shape)).astype(int)\nZ_start = (P-Rs//2)\nZ_stop = (P+Rs//2)+Rs%2\n\nR_start = (R_start - np.minimum(Z_start,0)).tolist()\nZ_start = (np.maximum(Z_start,0)).tolist()\nR_stop = np.maximum(R_start, (R_stop - np.maximum(Z_stop-Zs,0))).tolist()\nZ_stop = (np.minimum(Z_stop,Zs)).tolist()\n\nr = [slice(start,stop) for start,stop in zip(R_start,R_stop)]\nz = [slice(start,stop) for start,stop in zip(Z_start,Z_stop)]\nR[r] = Z[z]\nprint(Z)\nprint(R)",
"_____no_output_____"
]
],
[
[
"#### 81. Consider an array Z = \\[1,2,3,4,5,6,7,8,9,10,11,12,13,14\\], how to generate an array R = \\[\\[1,2,3,4\\], \\[2,3,4,5\\], \\[3,4,5,6\\], ..., \\[11,12,13,14\\]\\]? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Stefan van der Walt\n\nZ = np.arange(1,15,dtype=np.uint32)\nR = stride_tricks.as_strided(Z,(11,4),(4,4))\nprint(R)",
"_____no_output_____"
]
],
[
[
"#### 82. Compute a matrix rank (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Stefan van der Walt\n\nZ = np.random.uniform(0,1,(10,10))\nU, S, V = np.linalg.svd(Z) # Singular Value Decomposition\nrank = np.sum(S > 1e-10)\nprint(rank)",
"_____no_output_____"
]
],
[
[
"#### 83. How to find the most frequent value in an array?",
"_____no_output_____"
]
],
[
[
"Z = np.random.randint(0,10,50)\nprint(np.bincount(Z).argmax())",
"_____no_output_____"
]
],
[
[
"#### 84. Extract all the contiguous 3x3 blocks from a random 10x10 matrix (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Chris Barker\n\nZ = np.random.randint(0,5,(10,10))\nn = 3\ni = 1 + (Z.shape[0]-3)\nj = 1 + (Z.shape[1]-3)\nC = stride_tricks.as_strided(Z, shape=(i, j, n, n), strides=Z.strides + Z.strides)\nprint(C)",
"_____no_output_____"
]
],
[
[
"#### 85. Create a 2D array subclass such that Z\\[i,j\\] == Z\\[j,i\\] (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Eric O. Lebigot\n# Note: only works for 2d array and value setting using indices\n\nclass Symetric(np.ndarray):\n def __setitem__(self, index, value):\n i,j = index\n super(Symetric, self).__setitem__((i,j), value)\n super(Symetric, self).__setitem__((j,i), value)\n\ndef symetric(Z):\n return np.asarray(Z + Z.T - np.diag(Z.diagonal())).view(Symetric)\n\nS = symetric(np.random.randint(0,10,(5,5)))\nS[2,3] = 42\nprint(S)",
"_____no_output_____"
]
],
[
[
"#### 86. Consider a set of p matrices wich shape (n,n) and a set of p vectors with shape (n,1). How to compute the sum of of the p matrix products at once? (result has shape (n,1)) (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Stefan van der Walt\n\np, n = 10, 20\nM = np.ones((p,n,n))\nV = np.ones((p,n,1))\nS = np.tensordot(M, V, axes=[[0, 2], [0, 1]])\nprint(S)\n\n# It works, because:\n# M is (p,n,n)\n# V is (p,n,1)\n# Thus, summing over the paired axes 0 and 0 (of M and V independently),\n# and 2 and 1, to remain with a (n,1) vector.",
"_____no_output_____"
]
],
[
[
"#### 87. Consider a 16x16 array, how to get the block-sum (block size is 4x4)? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Robert Kern\n\nZ = np.ones((16,16))\nk = 4\nS = np.add.reduceat(np.add.reduceat(Z, np.arange(0, Z.shape[0], k), axis=0),\n np.arange(0, Z.shape[1], k), axis=1)\nprint(S)",
"_____no_output_____"
]
],
[
[
"#### 88. How to implement the Game of Life using numpy arrays? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Nicolas Rougier\n\ndef iterate(Z):\n # Count neighbours\n N = (Z[0:-2,0:-2] + Z[0:-2,1:-1] + Z[0:-2,2:] +\n Z[1:-1,0:-2] + Z[1:-1,2:] +\n Z[2: ,0:-2] + Z[2: ,1:-1] + Z[2: ,2:])\n\n # Apply rules\n birth = (N==3) & (Z[1:-1,1:-1]==0)\n survive = ((N==2) | (N==3)) & (Z[1:-1,1:-1]==1)\n Z[...] = 0\n Z[1:-1,1:-1][birth | survive] = 1\n return Z\n\nZ = np.random.randint(0,2,(50,50))\nfor i in range(100): Z = iterate(Z)\nprint(Z)",
"_____no_output_____"
]
],
[
[
"#### 89. How to get the n largest values of an array (★★★)",
"_____no_output_____"
]
],
[
[
"Z = np.arange(10000)\nnp.random.shuffle(Z)\nn = 5\n\n# Slow\nprint (Z[np.argsort(Z)[-n:]])\n\n# Fast\nprint (Z[np.argpartition(-Z,n)[:n]])",
"_____no_output_____"
]
],
[
[
"#### 90. Given an arbitrary number of vectors, build the cartesian product (every combinations of every item) (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Stefan Van der Walt\n\ndef cartesian(arrays):\n arrays = [np.asarray(a) for a in arrays]\n shape = (len(x) for x in arrays)\n\n ix = np.indices(shape, dtype=int)\n ix = ix.reshape(len(arrays), -1).T\n\n for n, arr in enumerate(arrays):\n ix[:, n] = arrays[n][ix[:, n]]\n\n return ix\n\nprint (cartesian(([1, 2, 3], [4, 5], [6, 7])))",
"_____no_output_____"
]
],
[
[
"#### 91. How to create a record array from a regular array? (★★★)",
"_____no_output_____"
]
],
[
[
"Z = np.array([(\"Hello\", 2.5, 3),\n (\"World\", 3.6, 2)])\nR = np.core.records.fromarrays(Z.T, \n names='col1, col2, col3',\n formats = 'S8, f8, i8')\nprint(R)",
"_____no_output_____"
]
],
[
[
"#### 92. Consider a large vector Z, compute Z to the power of 3 using 3 different methods (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Ryan G.\n\nx = np.random.rand(5e7)\n\n%timeit np.power(x,3)\n%timeit x*x*x\n%timeit np.einsum('i,i,i->i',x,x,x)",
"_____no_output_____"
]
],
[
[
"#### 93. Consider two arrays A and B of shape (8,3) and (2,2). How to find rows of A that contain elements of each row of B regardless of the order of the elements in B? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Gabe Schwartz\n\nA = np.random.randint(0,5,(8,3))\nB = np.random.randint(0,5,(2,2))\n\nC = (A[..., np.newaxis, np.newaxis] == B)\nrows = np.where(C.any((3,1)).all(1))[0]\nprint(rows)",
"_____no_output_____"
]
],
[
[
"#### 94. Considering a 10x3 matrix, extract rows with unequal values (e.g. \\[2,2,3\\]) (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Robert Kern\n\nZ = np.random.randint(0,5,(10,3))\nprint(Z)\n# solution for arrays of all dtypes (including string arrays and record arrays)\nE = np.all(Z[:,1:] == Z[:,:-1], axis=1)\nU = Z[~E]\nprint(U)\n# soluiton for numerical arrays only, will work for any number of columns in Z\nU = Z[Z.max(axis=1) != Z.min(axis=1),:]\nprint(U)",
"_____no_output_____"
]
],
[
[
"#### 95. Convert a vector of ints into a matrix binary representation (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Warren Weckesser\n\nI = np.array([0, 1, 2, 3, 15, 16, 32, 64, 128])\nB = ((I.reshape(-1,1) & (2**np.arange(8))) != 0).astype(int)\nprint(B[:,::-1])\n\n# Author: Daniel T. McDonald\n\nI = np.array([0, 1, 2, 3, 15, 16, 32, 64, 128], dtype=np.uint8)\nprint(np.unpackbits(I[:, np.newaxis], axis=1))",
"_____no_output_____"
]
],
[
[
"#### 96. Given a two dimensional array, how to extract unique rows? (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Jaime Fernández del Río\n\nZ = np.random.randint(0,2,(6,3))\nT = np.ascontiguousarray(Z).view(np.dtype((np.void, Z.dtype.itemsize * Z.shape[1])))\n_, idx = np.unique(T, return_index=True)\nuZ = Z[idx]\nprint(uZ)",
"_____no_output_____"
]
],
[
[
"#### 97. Considering 2 vectors A & B, write the einsum equivalent of inner, outer, sum, and mul function (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Alex Riley\n# Make sure to read: http://ajcr.net/Basic-guide-to-einsum/\n\nA = np.random.uniform(0,1,10)\nB = np.random.uniform(0,1,10)\n\nnp.einsum('i->', A) # np.sum(A)\nnp.einsum('i,i->i', A, B) # A * B\nnp.einsum('i,i', A, B) # np.inner(A, B)\nnp.einsum('i,j->ij', A, B) # np.outer(A, B)",
"_____no_output_____"
]
],
[
[
"#### 98. Considering a path described by two vectors (X,Y), how to sample it using equidistant samples (★★★)?",
"_____no_output_____"
]
],
[
[
"# Author: Bas Swinckels\n\nphi = np.arange(0, 10*np.pi, 0.1)\na = 1\nx = a*phi*np.cos(phi)\ny = a*phi*np.sin(phi)\n\ndr = (np.diff(x)**2 + np.diff(y)**2)**.5 # segment lengths\nr = np.zeros_like(x)\nr[1:] = np.cumsum(dr) # integrate path\nr_int = np.linspace(0, r.max(), 200) # regular spaced path\nx_int = np.interp(r_int, r, x) # integrate path\ny_int = np.interp(r_int, r, y)",
"_____no_output_____"
]
],
[
[
"#### 99. Given an integer n and a 2D array X, select from X the rows which can be interpreted as draws from a multinomial distribution with n degrees, i.e., the rows which only contain integers and which sum to n. (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Evgeni Burovski\n\nX = np.asarray([[1.0, 0.0, 3.0, 8.0],\n [2.0, 0.0, 1.0, 1.0],\n [1.5, 2.5, 1.0, 0.0]])\nn = 4\nM = np.logical_and.reduce(np.mod(X, 1) == 0, axis=-1)\nM &= (X.sum(axis=-1) == n)\nprint(X[M])",
"_____no_output_____"
]
],
[
[
"#### 100. Compute bootstrapped 95% confidence intervals for the mean of a 1D array X (i.e., resample the elements of an array with replacement N times, compute the mean of each sample, and then compute percentiles over the means). (★★★)",
"_____no_output_____"
]
],
[
[
"# Author: Jessica B. Hamrick\n\nX = np.random.randn(100) # random 1D array\nN = 1000 # number of bootstrap samples\nidx = np.random.randint(0, X.size, (N, X.size))\nmeans = X[idx].mean(axis=1)\nconfint = np.percentile(means, [2.5, 97.5])\nprint(confint)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e734984db358fd6e520cb4cb3e5d6f7353ceaf78 | 36,772 | ipynb | Jupyter Notebook | A3_notebook.ipynb | MrFizban/SQLAssignment3 | fd39d197fd540e2d69bc9993aa3d4ba0e9fe8251 | [
"MIT"
] | null | null | null | A3_notebook.ipynb | MrFizban/SQLAssignment3 | fd39d197fd540e2d69bc9993aa3d4ba0e9fe8251 | [
"MIT"
] | null | null | null | A3_notebook.ipynb | MrFizban/SQLAssignment3 | fd39d197fd540e2d69bc9993aa3d4ba0e9fe8251 | [
"MIT"
] | null | null | null | 35.666343 | 564 | 0.377108 | [
[
[
"import psycopg2\nimport random\nimport sys\nfrom time import time_ns\nTABLE_LENGTH = 100",
"_____no_output_____"
],
[
"connection = psycopg2.connect(dbname='db_057', user='db_057', host='sci-didattica.unitn.it', password='cavallo_bello')\ncursor = connection.cursor()",
"_____no_output_____"
],
[
"connection.close()",
"_____no_output_____"
],
[
"print(connection)\nprint(cursor)",
"<connection object at 0x7f6e16ae3890; dsn: 'user=db_057 password=xxx dbname=db_057 host=sci-didattica.unitn.it', closed: 0>\n<cursor object at 0x7f6e16ac64f0; closed: 0>\n"
]
],
[
[
"1. Fa il drop delle due tabelle dalla base di dati se sono già presenti",
"_____no_output_____"
]
],
[
[
"cursor.execute('DROP TABLE IF EXISTS \"Boat\";')\nconnection.commit()\ncursor.execute('DROP TABLE IF EXISTS \"Sailor\";')\nconnection.commit()",
"_____no_output_____"
]
],
[
[
"2. Crea le due tabelle come descritto sopra.",
"_____no_output_____"
]
],
[
[
"cursor.execute('CREATE TABLE \"Sailor\" (\"id\" INT PRIMARY KEY, \"name\" CHAR(50) NOT NULL, \"address\" CHAR(50) NOT NULL, \"age\" INT NOT NULL, \"level\" FLOAT NOT NULL);')\nconnection.commit()\ncursor.execute('CREATE TABLE \"Boat\" (\"bid\" CHAR(25) PRIMARY KEY, \"bname\" CHAR(50) NOT NULL, \"size\" CHAR(30) NOT NULL, \"captain\" INT NOT NULL REFERENCES \"Sailor\"(\"id\"));')\nconnection.commit()\n",
"_____no_output_____"
]
],
[
[
"3. Genera 1 milione di tuple (casuali1 ), in modo tale che ogni tupla abbia un valore diverso per l’attributo level, e\n le inserisce nella tabella S ailor. Assicurarsi inoltre che l’ultima tupla inserita, e solo quella, abbia come valore\n dell’attributo level, il valore 185.",
"_____no_output_____"
]
],
[
[
"start_time = time_ns() #inizioe conteggio tempo\nid = range(186, 3000000)\nlevel = random.sample(range(18600, 3000000), TABLE_LENGTH-1)\nfor i in range(0,len(level),1):\n level[i] = level[i]/100 \nlevel.append(185.00)\ntabella = []\nfor i in range(0,TABLE_LENGTH,1):\n riga = {\"id\":id[i], \"name\": get_random_string(12), \"address\": get_random_string(42),\"age\": random.randint(0,60),\"level\": level[i]}\n tabella.append(riga)\n\n\ntemp = f'''INSERT INTO \"Sailor\" (\"id\", \"name\", \"address\", \"age\", \"level\") VALUES (%(id)s, %(name)s,%(address)s, %(age)s, %(level)s )'''\ncursor.executemany(temp,tabella)\nconnection.commit()\nend_time = time_ns() # calcola fine",
"Load data in \"Sailor\"\n[-]\n"
]
],
[
[
"4. Genera 1 ulteriore milione di tuple (casuali) e le inserisce nella tabella B oat.",
"_____no_output_____"
]
],
[
[
"start_time = time_ns() #inizioe conteggio tempo\nbid = get_random_bid()\nsize = [\"large\", \"medium\", \"small\"]\nsize_list = []\nfor i in range(0,TABLE_LENGTH,1):\n size_list.append(size[random.randint(0,2)])\n\ncaptain = []\nfor i in range(0,TABLE_LENGTH,1):\n captain.append(id[random.randint(0,TABLE_LENGTH-1)])\ntabella = []\nfor i in range(0,TABLE_LENGTH,1):\n riga = {\"bid\":bid[i], \"bname\": get_random_string(12), \"size\": size_list[i],\"captain\": captain[i]}\n tabella.append(riga)\nend_time = time_ns() # calcola fine\nprint(f\"Step 4 needs {end_time - start_time} ns\") # calcola fine",
"Load data in \"Boat\"\n[-]\n"
],
[
"pd.read_sql('''SELECT * FROM \"Sailor\"''', connection)",
"_____no_output_____"
]
],
[
[
"5. Ottiene dal database tutti gli id del milione di tuple della tabella Sailor e li stampa su stderr .",
"_____no_output_____"
]
],
[
[
"cursor.execute(\"\"\" SELECT id FROM \"Sailor\" \"\"\")\nlista = cursor.fetchall()\nfor i in lista:\n print(i[0],file=sys.stderr)",
"id\n0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46\n47\n48\n49\n50\n51\n52\n53\n54\n55\n56\n57\n58\n59\n60\n61\n62\n63\n64\n65\n66\n67\n68\n69\n70\n71\n72\n73\n74\n75\n76\n77\n78\n79\n80\n81\n82\n83\n84\n85\n86\n87\n88\n89\n90\n91\n92\n93\n94\n95\n96\n97\n98\n99\n"
]
],
[
[
"6. Tutte le tuple con valore di level pari a 185 vengono modificate, cambiando il valore di level a 200 (la vostra\n query dovrà funzionare anche se la base di dati contiene più di una tupla con valore di level pari a 185). ",
"_____no_output_____"
]
],
[
[
"cursor.execute('''UPDATE \"Sailor\" SET \"level\" = 200 WHERE \"level\" = 185''')\nconnection.commit()",
"_____no_output_____"
]
],
[
[
"7. Seleziona l’id e l’address di tutte le tuple della tabella Sailor che hanno valore di level pari a 200, e li stampa su stderr.",
"_____no_output_____"
]
],
[
[
"cursor.execute('''SELECT id, address FROM \"Sailor\" as sl WHERE \"level\" = 200''')\nlista = cursor.fetchall()\nfor i in lista:\n print(f\"{i[0]},{i[1]}\",file=sys.stderr)\n",
"_____no_output_____"
]
],
[
[
"8. Crea un indice B+tree sull’attributo level.",
"_____no_output_____"
]
],
[
[
"cursor.execute('DROP INDEX IF EXISTS \"index_level\";')\nconnection.commit()",
"_____no_output_____"
],
[
"cursor.execute('CREATE INDEX index_level ON \"Sailor\" (\"level\");')\nconnection.commit()",
"_____no_output_____"
]
],
[
[
"9. Ottiene dal database tutti gli id del milione di tuple della tabella Sailor e li stampa su stderr .",
"_____no_output_____"
]
],
[
[
"cursor.execute(\"\"\" SELECT id FROM \"Sailor\" \"\"\")\nlista = cursor.fetchall()\nfor i in lista:\n print(i[0],file=sys.stderr)",
"0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46\n47\n48\n49\n50\n51\n52\n53\n54\n55\n56\n57\n58\n59\n60\n61\n62\n63\n64\n65\n66\n67\n68\n69\n70\n71\n72\n73\n74\n75\n76\n77\n78\n79\n80\n81\n82\n83\n84\n85\n86\n87\n88\n89\n90\n91\n92\n93\n94\n95\n96\n97\n98\n99\n"
]
],
[
[
"10. Tutte le tuple con valore di level pari a 200 vengono modificate, cambiando il valore di level a 210 (la vostra\n query dovrà funzionare anche se la base di dati contiene più di una tupla con valore di level pari a 200).",
"_____no_output_____"
]
],
[
[
"cursor.execute('''UPDATE \"Sailor\" SET \"level\" = 210 WHERE \"level\" = 200''')\nconnection.commit()",
"_____no_output_____"
]
],
[
[
"11. Seleziona l’id e l’address di tutte le tuple della tabella Sailor che hanno valore di level pari a 210, e li stampa su stderr.",
"_____no_output_____"
]
],
[
[
"cursor.execute('''SELECT id, address FROM \"Sailor\" as sl WHERE \"level\" = 210''')\nlista = cursor.fetchall()\nfor i in lista:\n print(f\"{i[0]},{i[1]}\",file=sys.stderr)",
"99,BYLBFIATYiLMRoHIYIHOTJejjmqTjYzxHZLeMJkkqS \n"
],
[
"pd.read_sql('''SELECT * FROM \"Sailor\"''', connection)",
"_____no_output_____"
],
[
"pd.read_sql('''SELECT * FROM \"Boat\"''', connection)",
"_____no_output_____"
],
[
"def product(*args, repeat=1):\n # product('ABCD', 'xy') --> Ax Ay Bx By Cx Cy Dx Dy\n # product(range(2), repeat=3) --> 000 001 010 011 100 101 110 111\n pools = [tuple(pool) for pool in args] * repeat\n result = [[]]\n for pool in pools:\n result = [x+[y] for x in result for y in pool]\n for prod in result:\n yield tuple(prod)",
"_____no_output_____"
],
[
"def get_random_bid(length):\n # Random string with the combination of lower and upper case\n letters = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','X','Y','Z','a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','x','y','z']\n return [''.join(i) for i in product(letters, repeat = length)]",
"_____no_output_____"
],
[
"print(get_random_bid(25))",
"_____no_output_____"
],
[
"chars = ['A','B','C','D','E','F','G','H','I','J','K']\nprint(len(chars))\nlista = [''.join(x) for x in product(chars, repeat=6)]\nprint(len(lista))",
"11\n1771561\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e734b5452013e3055e3c8a6488f4340f4aa5ec8f | 3,070 | ipynb | Jupyter Notebook | notebooks/examples/99_kotlin-numpy.ipynb | gitter-badger/ntakt | d1256469d167b4a47dfa366cae456f5ce6414f91 | [
"BSD-2-Clause"
] | null | null | null | notebooks/examples/99_kotlin-numpy.ipynb | gitter-badger/ntakt | d1256469d167b4a47dfa366cae456f5ce6414f91 | [
"BSD-2-Clause"
] | null | null | null | notebooks/examples/99_kotlin-numpy.ipynb | gitter-badger/ntakt | d1256469d167b4a47dfa366cae456f5ce6414f91 | [
"BSD-2-Clause"
] | null | null | null | 33.010753 | 259 | 0.587296 | [
[
[
"// set up dependencies\n\n// requires Python and numpy installation. Worked with Python==3.6 but not Python>=3.7\n\n// use local repo for now; not deployed to remote maven repo yet\n@file:Repository(\"*mavenLocal\")\n@file:Repository(\"https://maven.scijava.org/content/groups/public\")\n@file:Repository(\"https://dl.bintray.com/kotlin/kotlin-numpy\")\n@file:Repository(\"https://jitpack.io\")\n\n@file:DependsOn(\"org.jetbrains:kotlin-numpy:0.1.5\")\n\n// uncomment to search in your local maven repo\n// requires installation into local maven repository (./gradlew build publishToMavenLocal)\n@file:DependsOn(\"org.ntakt:ntakt:0.1.0-SNAPSHOT\")\n\n// uncomment to search in jitpack (TODO)\n// @file:DependsOn(\"com.github.saalfeldlab:ntakt:<tbd>\")",
"_____no_output_____"
],
[
"// We need ArrayImgs because we cannot pass access into ntakt convenience functions yet.\nimport net.imglib2.img.array.ArrayImgs\nimport org.jetbrains.numkt.core.*\nimport org.jetbrains.numkt.math.*\nimport org.jetbrains.numkt.*\nimport org.ntakt.*\nimport org.ntakt.access.DoubleBufferAccess",
"_____no_output_____"
],
[
"val a = arange(15.0)\nval access = DoubleBufferAccess(a.data!!)\nval dimg = ArrayImgs.doubles(access, a.size.toLong())\nprintln(dimg.flatStringRepresentation)\nprintln(a)\ndimg.forEach { it.set(kotlin.math.sqrt(it.realDouble)) }\nprintln(dimg.flatStringRepresentation)\nprintln(a)",
"ArrayImg [15]: [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0]\n[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14.]\nArrayImg [15]: [0.0, 1.0, 1.4142135623730951, 1.7320508075688772, 2.0, 2.23606797749979, 2.449489742783178, 2.6457513110645907, 2.8284271247461903, 3.0, 3.1622776601683795, 3.3166247903554, 3.4641016151377544, 3.605551275463989, 3.7416573867739413]\n[0. 1. 1.41421356 1.73205081 2. 2.23606798\n 2.44948974 2.64575131 2.82842712 3. 3.16227766 3.31662479\n 3.46410162 3.60555128 3.74165739]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e734d3988218f7ad7ebd9b181ed83458126509e8 | 157,642 | ipynb | Jupyter Notebook | labs/lab07.ipynb | ClaudioFigueroa/mat281_portfolio | ecd21457f61a5c1b781fe5f54f72348c9e652173 | [
"MIT"
] | null | null | null | labs/lab07.ipynb | ClaudioFigueroa/mat281_portfolio | ecd21457f61a5c1b781fe5f54f72348c9e652173 | [
"MIT"
] | null | null | null | labs/lab07.ipynb | ClaudioFigueroa/mat281_portfolio | ecd21457f61a5c1b781fe5f54f72348c9e652173 | [
"MIT"
] | null | null | null | 261.429519 | 138,831 | 0.636823 | [
[
[
"# Laboratorio 7",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport altair as alt\n\nfrom sklearn import datasets, linear_model\nfrom sklearn.metrics import mean_squared_error, r2_score\nalt.themes.enable('opaque')\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"En este laboratorio utilizaremos los mismos datos de diabetes vistos en la clase",
"_____no_output_____"
]
],
[
[
"diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True, as_frame=True)\ndiabetes = pd.concat([diabetes_X, diabetes_y], axis=1)\ndiabetes.head()",
"_____no_output_____"
]
],
[
[
"## Pregunta 1\n\n(1 pto)\n\n* ¿Por qué la columna de sexo tiene esos valores?\n* ¿Cuál es la columna a predecir?\n* ¿Crees que es necesario escalar o transformar los datos antes de comenzar el modelamiento?",
"_____no_output_____"
],
[
"__Respuesta:__\n\n1.Tiene esos valores por que se cuantificaron los sexos, atribuyendoles la misma distancia desde el origen, y debido a la cantidad de cada unoy su posición es que son. \\\\\n2.La columna target\n3.Si, debido a que así se puede ver su influencia ponderada respecto a otro dato de la misma columna.",
"_____no_output_____"
],
[
"## Pregunta 2\n\n(1 pto)\n\nRealiza dos regresiones lineales con todas las _features_, el primer caso incluyendo intercepto y el segundo sin intercepto. Luego obtén la predicción para así calcular el error cuadrático medio y coeficiente de determinación de cada uno de ellos.",
"_____no_output_____"
]
],
[
[
"d_x=diabetes.drop(\"target\",axis=1)\nd_y=diabetes[\"target\"]\nregr_with_incerpet = LinearRegression(fit_intercept=True)# FIX ME PLEASE #\nregr_with_incerpet.fit(d_x, d_y)",
"_____no_output_____"
],
[
"diabetes_y_pred_with_intercept = regr_with_incerpet.predict(d_x)",
"_____no_output_____"
],
[
"# Coeficientes\nprint(f\"Coefficients: \\n{regr_with_incerpet.coef_}\\n\")\n# Intercepto\nprint(f\"Intercept: \\n{regr_with_incerpet.intercept_}\\n\")\n# Error cuadrático medio\nprint(f\"Mean squared error: {mean_squared_error(d_y,diabetes_y_pred_with_intercept):.2f}\\n\")\n# Coeficiente de determinación\nprint(f\"Coefficient of determination: {r2_score(d_y,diabetes_y_pred_with_intercept):.2f}\")",
"Coefficients: \n[ -10.01219782 -239.81908937 519.83978679 324.39042769 -792.18416163\n 476.74583782 101.04457032 177.06417623 751.27932109 67.62538639]\n\nIntercept: \n152.1334841628965\n\nMean squared error: 2859.69\n\nCoefficient of determination: 0.52\n"
],
[
"regr_without_incerpet = LinearRegression(fit_intercept=False)# FIX ME PLEASE #\nregr_without_incerpet.fit(d_x, d_y)",
"_____no_output_____"
],
[
"diabetes_y_pred_without_intercept = regr_without_incerpet.predict(d_x) # FIX ME PLEASE #",
"_____no_output_____"
],
[
"# Coeficientes\nprint(f\"Coefficients: \\n{regr_without_incerpet.coef_}\\n\")\n# Error cuadrático medio\nprint(f\"Mean squared error: {mean_squared_error(d_y,diabetes_y_pred_without_intercept):.2f}\\n\")\n# Coeficiente de determinación\nprint(f\"Coefficient of determination: {r2_score(d_y,diabetes_y_pred_without_intercept):.2f}\")",
"Coefficients: \n[ -10.01219782 -239.81908937 519.83978679 324.39042769 -792.18416163\n 476.74583782 101.04457032 177.06417623 751.27932109 67.62538639]\n\nMean squared error: 26004.29\n\nCoefficient of determination: -3.39\n"
]
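,
[
"# Illustrative sketch for Question 1 (an aside, assumed here only for illustration):\n# the sklearn diabetes features come already mean-centred and scaled, which is why\n# \"sex\" takes only two small numeric values on either side of the origin. Starting\n# from raw measurements we could standardise them ourselves, e.g. with StandardScaler.\nfrom sklearn.preprocessing import StandardScaler\n\nprint(diabetes[\"sex\"].unique())               # the two encoded sex values\nscaled = StandardScaler().fit_transform(d_x)  # d_x was built in the first cell of this question\nprint(scaled.mean(axis=0).round(6))           # each column now has mean ~0\nprint(scaled.std(axis=0).round(6))            # ... and standard deviation ~1",
"_____no_output_____"
]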
],
[
[
"**Pregunta: ¿Qué tan bueno fue el ajuste del modelo?**",
"_____no_output_____"
],
[
"__Respuesta:__\n El ajuste se ve muy malo debido a que entrega un gran error.",
"_____no_output_____"
],
[
"## Pregunta 3\n\n(1 pto)\n\nRealizar multiples regresiones lineales utilizando una sola _feature_ a la vez. \n\nEn cada iteración:\n\n- Crea un arreglo `X`con solo una feature filtrando `X`.\n- Crea un modelo de regresión lineal con intercepto.\n- Ajusta el modelo anterior.\n- Genera una predicción con el modelo.\n- Calcula e imprime las métricas de la pregunta anterior.",
"_____no_output_____"
]
],
[
[
"for col in [\"age\",\"sex\",\"bmi\",\"bp\",\"s1\",\"s2\",\"s3\",\"s4\",\"s5\",\"s6\"]:\n X_i = np.array([np.ones(diabetes[col].shape), diabetes[col]]).T\n regr_i = LinearRegression(fit_intercept=True)\n regr_i.fit(X_i,diabetes['target'])\n diabetes_y_pred_i = regr_i.predict(X_i) # FIX ME PLEASE #\n print(f\"Feature: {col}\")\n print(f\"\\tCoefficients: {regr_i.coef_[1]}\")\n print(f\"\\tIntercept: {regr_i.intercept_}\")\n print(f\"\\tMean squared error: {mean_squared_error(diabetes['target'],diabetes_y_pred_i):.2f}\\n\")\n print(f\"\\tCoefficient of determination: {r2_score(diabetes['target'],diabetes_y_pred_i):.2f}\")",
"Feature: age\n\tCoefficients: 304.1830745282946\n\tIntercept: 152.13348416289605\n\tMean squared error: 5720.55\n\n\tCoefficient of determination: 0.04\nFeature: sex\n\tCoefficients: 69.71535567841468\n\tIntercept: 152.13348416289594\n\tMean squared error: 5918.89\n\n\tCoefficient of determination: 0.00\nFeature: bmi\n\tCoefficients: 949.4352603839493\n\tIntercept: 152.1334841628967\n\tMean squared error: 3890.46\n\n\tCoefficient of determination: 0.34\nFeature: bp\n\tCoefficients: 714.7416437042881\n\tIntercept: 152.13348416289585\n\tMean squared error: 4774.10\n\n\tCoefficient of determination: 0.19\nFeature: s1\n\tCoefficients: 343.25445188896424\n\tIntercept: 152.13348416289597\n\tMean squared error: 5663.32\n\n\tCoefficient of determination: 0.04\nFeature: s2\n\tCoefficients: 281.7845933524593\n\tIntercept: 152.1334841628959\n\tMean squared error: 5750.24\n\n\tCoefficient of determination: 0.03\nFeature: s3\n\tCoefficients: -639.1452793225127\n\tIntercept: 152.13348416289566\n\tMean squared error: 5005.66\n\n\tCoefficient of determination: 0.16\nFeature: s4\n\tCoefficients: 696.8830300922425\n\tIntercept: 152.13348416289568\n\tMean squared error: 4831.14\n\n\tCoefficient of determination: 0.19\nFeature: s5\n\tCoefficients: 916.138722815098\n\tIntercept: 152.13348416289628\n\tMean squared error: 4030.99\n\n\tCoefficient of determination: 0.32\nFeature: s6\n\tCoefficients: 619.2228206843336\n\tIntercept: 152.13348416289614\n\tMean squared error: 5062.38\n\n\tCoefficient of determination: 0.15\n"
]
],
[
[
"**Pregunta: Si tuvieras que escoger una sola _feauture_, ¿Cuál sería? ¿Por qué?**",
"_____no_output_____"
],
[
"**Respuesta: Sería el bmi de debido a que posee el error medio cuadratico mas bajo, y el coeficiente de determinacion mas alto.",
"_____no_output_____"
],
[
"## Ejercicio 4\n\n(1 pto)\n\nCon la feature escogida en el ejercicio 3 realiza el siguiente gráfico:\n\n- Scatter Plot\n- Eje X: Valores de la feature escogida.\n- Eje Y: Valores de la columna a predecir (target).\n- En color rojo dibuja la recta correspondiente a la regresión lineal (utilizando `intercept_`y `coefs_`).\n- Coloca un título adecuado, nombre de los ejes, etc.\n\nPuedes utilizar `matplotlib` o `altair`, el que prefieras.",
"_____no_output_____"
]
],
[
[
"regr = linear_model.LinearRegression(fit_intercept=True).fit(np.array([np.ones(diabetes[\"bmi\"].shape), diabetes[\"bmi\"]]).T, diabetes[\"target\"])",
"_____no_output_____"
],
[
"xp=np.arange(-0.2,0.3,0.02)\nyp=regr.coef_[1]*xp+regr.intercept_\ndf=pd.DataFrame({'xp': xp, 'yp': yp})\nalt.Chart(diabetes).mark_circle(size=60).encode(\n x='bmi',\n y='target'\n)+alt.Chart(df).mark_line(color='red').encode(\n x=alt.X('xp', title='bmi'),\n y= alt.Y('yp', title='Target'),\n\n)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
e734d3ae2dfab1a4fb19a0211728a1c9729d090f | 329,330 | ipynb | Jupyter Notebook | courses/fast-and-lean-data-science/05K_MNIST_TF20Keras_Tensorboard_solution.ipynb | Glairly/introduction_to_tensorflow | aa0a44d9c428a6eb86d1f79d73f54c0861b6358d | [
"Apache-2.0"
] | 2 | 2022-01-06T11:52:57.000Z | 2022-01-09T01:53:56.000Z | courses/fast-and-lean-data-science/05K_MNIST_TF20Keras_Tensorboard_solution.ipynb | Glairly/introduction_to_tensorflow | aa0a44d9c428a6eb86d1f79d73f54c0861b6358d | [
"Apache-2.0"
] | null | null | null | courses/fast-and-lean-data-science/05K_MNIST_TF20Keras_Tensorboard_solution.ipynb | Glairly/introduction_to_tensorflow | aa0a44d9c428a6eb86d1f79d73f54c0861b6358d | [
"Apache-2.0"
] | null | null | null | 243.587278 | 35,281 | 0.909282 | [
[
[
"## MNIST in Keras with Tensorboard\n\nThis sample trains an \"MNIST\" handwritten digit \nrecognition model on a GPU or TPU backend using a Keras\nmodel. Data are handled using the tf.data.Datset API. This is\na very simple sample provided for educational purposes. Do\nnot expect outstanding TPU performance on a dataset as\nsmall as MNIST.",
"_____no_output_____"
],
[
"### Parameters",
"_____no_output_____"
]
],
[
[
"BATCH_SIZE = 64\nLEARNING_RATE = 0.002\n# GCS bucket for training logs and for saving the trained model\n# You can leave this empty for local saving, unless you are using a TPU.\n# TPUs do not have access to your local instance and can only write to GCS.\nBUCKET=\"gs://ml1-demo-martin/mnist\" # a valid bucket name must start with gs://\n\ntraining_images_file = 'gs://mnist-public/train-images-idx3-ubyte'\ntraining_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'\nvalidation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'\nvalidation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'",
"_____no_output_____"
]
],
[
[
"### Imports",
"_____no_output_____"
]
],
[
[
"import os, re, math, json, time\nimport PIL.Image, PIL.ImageFont, PIL.ImageDraw\nimport numpy as np\nimport tensorflow as tf\nfrom matplotlib import pyplot as plt\nfrom tensorflow.python.platform import tf_logging\nprint(\"Tensorflow version \" + tf.__version__)",
"Tensorflow version 2.2.0-dlenv\n"
]
],
[
[
"## TPU/GPU detection",
"_____no_output_____"
]
],
[
[
"try: # detect TPUs\n tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection\n tf.config.experimental_connect_to_cluster(tpu)\n tf.tpu.experimental.initialize_tpu_system(tpu)\n strategy = tf.distribute.experimental.TPUStrategy(tpu)\nexcept ValueError: # detect GPUs\n strategy = tf.distribute.MirroredStrategy() # for GPU or multi-GPU machines\n #strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU\n #strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # for clusters of multi-GPU machines\n\nprint(\"Number of accelerators: \", strategy.num_replicas_in_sync)\n \n# adjust batch size and learning rate for distributed computing\nglobal_batch_size = BATCH_SIZE * strategy.num_replicas_in_sync # num replcas is 8 on a single TPU or N when runing on N GPUs.\nlearning_rate = LEARNING_RATE * strategy.num_replicas_in_sync",
"INFO:tensorflow:Initializing the TPU system: martin-tpuv3-8-tf22\n"
],
[
"#@title visualization utilities [RUN ME]\n\"\"\"\nThis cell contains helper functions used for visualization\nand downloads only. You can skip reading it. There is very\nlittle useful Keras/Tensorflow code here.\n\"\"\"\n\n# Matplotlib config\nplt.rc('image', cmap='gray_r')\nplt.rc('grid', linewidth=0)\nplt.rc('xtick', top=False, bottom=False, labelsize='large')\nplt.rc('ytick', left=False, right=False, labelsize='large')\nplt.rc('axes', facecolor='F8F8F8', titlesize=\"large\", edgecolor='white')\nplt.rc('text', color='a8151a')\nplt.rc('figure', facecolor='F0F0F0')# Matplotlib fonts\nMATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), \"mpl-data/fonts/ttf\")\n\n# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)\ndef dataset_to_numpy_util(training_dataset, validation_dataset, N):\n \n # get one batch from each: 10000 validation digits, N training digits\n unbatched_train_ds = training_dataset.unbatch()\n \n if tf.executing_eagerly():\n # This is the TF 2.0 \"eager execution\" way of iterating through a tf.data.Dataset\n for v_images, v_labels in validation_dataset:\n break\n\n for t_images, t_labels in unbatched_train_ds.batch(N):\n break\n\n validation_digits = v_images.numpy()\n validation_labels = v_labels.numpy()\n training_digits = t_images.numpy()\n training_labels = t_labels.numpy()\n else:\n # This is the legacy TF 1.x way of iterating through a tf.data.Dataset\n v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()\n t_images, t_labels = unbatched_train_ds.batch(N).make_one_shot_iterator().get_next()\n # Run once, get one batch. Session.run returns numpy results\n with tf.Session() as ses:\n (validation_digits, validation_labels,\n training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])\n \n # these were one-hot encoded in the dataset\n validation_labels = np.argmax(validation_labels, axis=1)\n training_labels = np.argmax(training_labels, axis=1)\n \n return (training_digits, training_labels,\n validation_digits, validation_labels)\n\n# create digits from local fonts for testing\ndef create_digits_from_local_fonts(n):\n font_labels = []\n img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1\n font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)\n font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)\n d = PIL.ImageDraw.Draw(img)\n for i in range(n):\n font_labels.append(i%10)\n d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)\n font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)\n font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])\n return font_digits, font_labels\n\n# utility to display a row of digits with their predictions\ndef display_digits(digits, predictions, labels, title, n):\n plt.figure(figsize=(13,3))\n digits = np.reshape(digits, [n, 28, 28])\n digits = np.swapaxes(digits, 0, 1)\n digits = np.reshape(digits, [28, 28*n])\n plt.yticks([])\n plt.xticks([28*x+14 for x in range(n)], predictions)\n for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):\n if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red\n plt.imshow(digits)\n plt.grid(None)\n plt.title(title)\n \n# utility to display multiple rows of digits, sorted by 
unrecognized/recognized status\ndef display_top_unrecognized(digits, predictions, labels, n, lines):\n idx = np.argsort(predictions==labels) # sort order: unrecognized first\n for i in range(lines):\n display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],\n \"{} sample validation digits out of {} with bad predictions in red and sorted first\".format(n*lines, len(digits)) if i==0 else \"\", n)",
"_____no_output_____"
]
],
[
[
"### Colab-only auth for this notebook and the TPU",
"_____no_output_____"
]
],
[
[
"#IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence\n#if IS_COLAB_BACKEND:\n# from google.colab import auth\n# auth.authenticate_user() # Authenticates the backend and also the TPU using your credentials so that they can access your private GCS buckets",
"_____no_output_____"
]
],
[
[
"### tf.data.Dataset: parse files and prepare training and validation datasets\nPlease read the [best practices for building](https://www.tensorflow.org/guide/performance/datasets) input pipelines with tf.data.Dataset",
"_____no_output_____"
]
],
[
[
"def read_label(tf_bytestring):\n label = tf.io.decode_raw(tf_bytestring, tf.uint8)\n label = tf.reshape(label, [])\n label = tf.one_hot(label, 10)\n return label\n \ndef read_image(tf_bytestring):\n image = tf.io.decode_raw(tf_bytestring, tf.uint8)\n image = tf.cast(image, tf.float32)/256.0\n image = tf.reshape(image, [28*28])\n return image\n \ndef load_dataset(image_file, label_file):\n imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)\n imagedataset = imagedataset.map(read_image, num_parallel_calls=16)\n labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)\n labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)\n dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))\n return dataset \n \ndef get_training_dataset(image_file, label_file, batch_size):\n dataset = load_dataset(image_file, label_file)\n dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset\n dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)\n dataset = dataset.repeat() # Mandatory for Keras for now\n dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed\n dataset = dataset.prefetch(-1) # fetch next batches while training on the current one (-1: autotune prefetch buffer size)\n return dataset\n \ndef get_validation_dataset(image_file, label_file):\n dataset = load_dataset(image_file, label_file)\n dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset\n dataset = dataset.repeat() # Mandatory for Keras for now\n dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch\n return dataset\n\n# instantiate the datasets\ntraining_dataset = get_training_dataset(training_images_file, training_labels_file, global_batch_size)\nvalidation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)",
"_____no_output_____"
]
],
[
[
"### Let's have a look at the data",
"_____no_output_____"
]
],
[
[
"N = 24\n(training_digits, training_labels,\n validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)\ndisplay_digits(training_digits, training_labels, training_labels, \"training digits and their labels\", N)\ndisplay_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], \"validation digits and their labels\", N)\nfont_digits, font_labels = create_digits_from_local_fonts(N)",
"_____no_output_____"
]
],
[
[
"### Keras model: 3 convolutional layers, 2 dense layers",
"_____no_output_____"
]
],
[
[
"# This model trains to 99.4%— sometimes 99.5%— accuracy in 10 epochs (with a batch size of 64)\n\ndef make_model():\n \n model = tf.keras.Sequential(\n [\n tf.keras.layers.Reshape(input_shape=(28*28,), target_shape=(28, 28, 1)),\n\n tf.keras.layers.Conv2D(filters=6, kernel_size=3, padding='same', use_bias=False), # no bias necessary before batch norm\n tf.keras.layers.BatchNormalization(scale=False, center=True), # no batch norm scaling necessary before \"relu\"\n tf.keras.layers.Activation('relu'), # activation after batch norm\n\n tf.keras.layers.Conv2D(filters=12, kernel_size=6, padding='same', use_bias=False, strides=2),\n tf.keras.layers.BatchNormalization(scale=False, center=True),\n tf.keras.layers.Activation('relu'),\n\n tf.keras.layers.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=False, strides=2),\n tf.keras.layers.BatchNormalization(scale=False, center=True),\n tf.keras.layers.Activation('relu'),\n\n tf.keras.layers.Flatten(),\n \n tf.keras.layers.Dense(200, use_bias=False),\n tf.keras.layers.BatchNormalization(scale=False, center=True),\n tf.keras.layers.Activation('relu'),\n \n tf.keras.layers.Dropout(0.5), # Dropout on dense layer only\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n\n model.compile(optimizer='adam', # learning rate will be set by LearningRateScheduler\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n return model\n \nwith strategy.scope(): # the new way of handling distribution strategies in Tensorflow 1.14+\n model = make_model()\n\n# print model layers\nmodel.summary()\n \n# set up learning rate decay\nlr_decay = tf.keras.callbacks.LearningRateScheduler(lambda epoch: learning_rate * math.pow(0.5, 1+epoch) + learning_rate/200, verbose=True)\n\n# set up Tensorboard logs\ntimestamp = time.strftime(\"%Y-%m-%d-%H-%M-%S\")\nlog_dir=os.path.join(BUCKET, 'mnist-logs', timestamp)\ntb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, update_freq=50*global_batch_size)\nprint(\"Tensorboard loggs written to: \", log_dir)",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nreshape (Reshape) (None, 28, 28, 1) 0 \n_________________________________________________________________\nconv2d (Conv2D) (None, 28, 28, 6) 54 \n_________________________________________________________________\nbatch_normalization (BatchNo (None, 28, 28, 6) 18 \n_________________________________________________________________\nactivation (Activation) (None, 28, 28, 6) 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 14, 14, 12) 2592 \n_________________________________________________________________\nbatch_normalization_1 (Batch (None, 14, 14, 12) 36 \n_________________________________________________________________\nactivation_1 (Activation) (None, 14, 14, 12) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 7, 7, 24) 10368 \n_________________________________________________________________\nbatch_normalization_2 (Batch (None, 7, 7, 24) 72 \n_________________________________________________________________\nactivation_2 (Activation) (None, 7, 7, 24) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 1176) 0 \n_________________________________________________________________\ndense (Dense) (None, 200) 235200 \n_________________________________________________________________\nbatch_normalization_3 (Batch (None, 200) 600 \n_________________________________________________________________\nactivation_3 (Activation) (None, 200) 0 \n_________________________________________________________________\ndropout (Dropout) (None, 200) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 10) 2010 \n=================================================================\nTotal params: 250,950\nTrainable params: 250,466\nNon-trainable params: 484\n_________________________________________________________________\nTensorboard loggs written to: gs://ml1-demo-martin/mnist/mnist-logs/2020-06-23-23-46-45\n"
]
],
[
[
"### Train and validate the model",
"_____no_output_____"
]
],
[
[
"EPOCHS = 10\nsteps_per_epoch = 60000//global_batch_size # 60,000 items in this dataset\nprint(\"Step (batches) per epoch: \", steps_per_epoch)\n \nhistory = model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,\n validation_data=validation_dataset, validation_steps=1, callbacks=[lr_decay, tb_callback])",
"Step (batches) per epoch: 117\n\nEpoch 00001: LearningRateScheduler reducing learning rate to 0.00808.\nEpoch 1/10\n 2/117 [..............................] - ETA: 2:33 - accuracy: 0.3652 - loss: 2.0532WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (1.480141). Check your callbacks.\n"
]
],
[
[
"### Visualize predictions",
"_____no_output_____"
]
],
[
[
"# recognize digits from local fonts\nprobabilities = model.predict(font_digits, steps=1)\npredicted_labels = np.argmax(probabilities, axis=1)\ndisplay_digits(font_digits, predicted_labels, font_labels, \"predictions from local fonts (bad predictions in red)\", N)\n\n# recognize validation digits\nprobabilities = model.predict(validation_digits, steps=1)\npredicted_labels = np.argmax(probabilities, axis=1)\ndisplay_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)",
"_____no_output_____"
]
],
[
[
"## License",
"_____no_output_____"
],
[
"\n\n---\n\n\nauthor: Martin Gorner<br>\ntwitter: @martin_gorner\n\n\n---\n\n\nCopyright 2020 Google LLC\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n\n---\n\n\nThis is not an official Google product but sample code provided for an educational purpose\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e734db414ee14895f45174b49a05d9237ca038de | 4,918 | ipynb | Jupyter Notebook | pilas.ipynb | pandemicbat801/daa_2021_1 | 4b912d0ca5631882f8137583dfbc25280ec8c574 | [
"MIT"
] | null | null | null | pilas.ipynb | pandemicbat801/daa_2021_1 | 4b912d0ca5631882f8137583dfbc25280ec8c574 | [
"MIT"
] | null | null | null | pilas.ipynb | pandemicbat801/daa_2021_1 | 4b912d0ca5631882f8137583dfbc25280ec8c574 | [
"MIT"
] | null | null | null | 31.935065 | 229 | 0.422936 | [
[
[
"<a href=\"https://colab.research.google.com/github/pandemicbat801/daa_2021_1/blob/master/pilas.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"class stack:\n def __init__ (self):\n self.__datos=[]\n\n def is_empty(self):\n return len(self.__datos)==0\n\n def get_top(self):\n return self.__datos[-1]\n\n def pop(self):\n return self.__datos.pop()\n\n def push(self,valor):\n self.__datos.append(valor)\n\n def get_length(self):\n return len(self.__datos)\n\n def to_string(self):\n print('--------------------')\n for ele in self.__datos[-1::-1]:\n print(f'{ele}')\n print('--------------------')\n\n",
"_____no_output_____"
],
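The cell below uses this class to balance-check a Java file. As a small, hedged usage sketch of the class on its own (not part of the original notebook, and assuming the class cell above has already been run), the push/pop behaviour looks like this:

```python
# Quick exercise of the stack class defined above.
s = stack()
print(s.is_empty())                 # True, nothing pushed yet

s.push('parentesis no cerrado')
s.push('corchete no cerrado')
print(s.get_length())               # 2
print(s.get_top())                  # 'corchete no cerrado', the last item pushed

s.to_string()                       # prints the stack from top to bottom
print(s.pop())                      # removes and returns the top item
print(s.get_length())               # 1
```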
[
"import re\ndef leer():\n pila=stack()\n\n patron_parentesi=re.compile(r'\\(')\n patron_parentesisCierre=re.compile(r'\\)')\n patron_corchete=re.compile(r'\\[')\n patron_corcheteCierre=re.compile(r'\\]')\n patron_llave=re.compile(r'\\{')\n patron_llaveCierre=re.compile(r'\\}')\n\n f = open(\"prueba.java\", \"r\")\n while(True):\n linea = f.readline()\n\n resultado_parentesis=re.search(patron_parentesi,linea)\n resultado_parentesisCierre=re.search(patron_parentesisCierre,linea)\n resultado_corchete=re.search(patron_corchete,linea)\n resultado_corcheteCierre=re.search(patron_corcheteCierre,linea)\n resultado_llave=re.search(patron_llave,linea)\n resultado_llaveCierre=re.search(patron_llaveCierre,linea)\n\n\n if resultado_parentesis != None:\n pila.push('parentesis no cerrado')\n\n if resultado_parentesisCierre !=None:\n if pila.is_empty():\n pila.push('Falta parentesis de apertura')\n else:\n pila.pop()\n\n\n if resultado_corchete != None:\n pila.push('corchete no cerrado')\n\n if resultado_corcheteCierre !=None:\n if pila.is_empty():\n pila.push('Falta parentesis de apertura')\n else:\n pila.pop()\n\n\n if resultado_llave != None:\n pila.push('pico parentesis no cerrado')\n\n if resultado_llaveCierre !=None:\n if pila.is_empty():\n pila.push('Falta parentesis de apertura')\n else:\n pila.pop()\n\n if not linea:\n\n break\n f.close()\n\n print(f'{pila.to_string()}')\n if pila.is_empty():\n print('Esta balanceado')\n else:\n print('No esta balanceado')\n\nif __name__ == \"__main__\":\n leer()",
"--------------------\n--------------------\nNone\nEsta balanceado\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
e734de76b2c15ce6204ace2daaa393625585e442 | 1,426 | ipynb | Jupyter Notebook | Chapter4/vsearch.ipynb | DuyTungHa/Python-Practices | 2850eadab945c1f6039552930885cd3b2e5bd992 | [
"MIT"
] | null | null | null | Chapter4/vsearch.ipynb | DuyTungHa/Python-Practices | 2850eadab945c1f6039552930885cd3b2e5bd992 | [
"MIT"
] | null | null | null | Chapter4/vsearch.ipynb | DuyTungHa/Python-Practices | 2850eadab945c1f6039552930885cd3b2e5bd992 | [
"MIT"
] | null | null | null | 20.666667 | 73 | 0.513324 | [
[
[
"def search4vowels(phrase:str) -> set:\n \"\"\"Display any vowels found in a supplied word.\"\"\"\n vowels = set('aeiou')\n return vowels.intersection(set(phrase))\n\ndef search4letters(phrase:str, letters:str) ->set:\n \"\"\"Return a set of the 'letters' found in 'phrase'.\"\"\"\n return set(letters).intersection(set(phrase))",
"_____no_output_____"
],
[
"search4letters('life, the universe, and everything', 'o')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
e734e46649b7420320006ccd0c0e07da08420815 | 8,444 | ipynb | Jupyter Notebook | Data_collect-checkpoint.ipynb | fengxiaolong886/DLModelReview | 349e2c017ce73fd1e075a3ae6153e27c54df0d63 | [
"Apache-2.0"
] | 1 | 2019-12-11T15:07:34.000Z | 2019-12-11T15:07:34.000Z | Data_collect-checkpoint.ipynb | fengxiaolong886/DLModelReview | 349e2c017ce73fd1e075a3ae6153e27c54df0d63 | [
"Apache-2.0"
] | null | null | null | Data_collect-checkpoint.ipynb | fengxiaolong886/DLModelReview | 349e2c017ce73fd1e075a3ae6153e27c54df0d63 | [
"Apache-2.0"
] | null | null | null | 25.981538 | 2,801 | 0.325083 | [
[
[
"import tushare as ts\nimport os\nimport pandas as pd",
"_____no_output_____"
],
[
"today=ts.get_today_all()",
"[Getting data:]############################################################"
],
[
"oldlist=os.listdir(\"E://notebook/shsz_stockdata/alldata\")\noldlist=pd.DataFrame(oldlist)\noldlist.columns=[\"code\"]\n",
"_____no_output_____"
],
[
"def simplecode(x):\n return x[2:8]",
"_____no_output_____"
],
[
"oldlist=oldlist[\"code\"].apply(simplecode)\noldlist=oldlist.to_list()\nnewlist=today[\"code\"].to_list()\nalllist=[]\nalllist.extend(oldlist)\nalllist.extend(newlist)\nalllistset=set(alllist)\nalllist=list(alllistset)",
"_____no_output_____"
],
[
"print(len(alllist))",
"3733\n"
],
[
"newdir=\"E://notebook/shsz_stockdata/data20190617\"\ni=0\nerrorlist=[]\nfor eachcode in alllist:\n try:\n data=ts.get_hist_data(eachcode)\n datapath=os.path.join(newdir,eachcode)\n data.to_csv(datapath)\n i+=1\n if i%20==0:\n print(i)\n except:\n errorlist.extend(eachcode)\n continue\nprint(errorlist)\nprint(len(errorlist))",
"20\n40\n60\n80\n100\n120\n140\n160\n180\n200\n220\n240\n260\n280\n300\n320\n340\n360\n380\n400\n420\n440\n460\n480\n500\n520\n540\n560\n580\n600\n620\n640\n660\n680\n700\n720\n740\n760\n780\n800\n820\n840\n860\n880\n900\n920\n940\n960\n980\n1000\n1020\n1040\n1060\n1080\n1100\n1120\n1140\n1160\n1180\n1200\n1220\n1240\n1260\n1280\n1300\n1320\n1340\n1360\n1380\n1400\n1420\n1440\n1460\n1480\n1500\n1520\n1540\n1560\n1580\n1600\n1620\n1640\n1660\n1680\n1700\n1720\n1740\n1760\n1780\n1800\n1820\n1840\n1860\n1880\n1900\n1920\n1940\n1960\n1980\n2000\n2020\n2040\n2060\n2080\n2100\n2120\n2140\n2160\n2180\n2200\n2220\n2240\n2260\n2280\n2300\n2320\n2340\n2360\n2380\n2400\n2420\n2440\n2460\n2480\n2500\n2520\n2540\n2560\n2580\n2600\n2620\n2640\n2660\n2680\n2700\n2720\n2740\n2760\n2780\n2800\n2820\n2840\n2860\n2880\n2900\n2920\n2940\n2960\n2980\n3000\n3020\n3040\n3060\n3080\n3100\n3120\n3140\n3160\n3180\n3200\n3220\n3240\n3260\n3280\n3300\n3320\n3340\n3360\n3380\n3400\n3420\n3440\n3460\n3480\n3500\n3520\n3540\n3560\n3580\n3600\n3620\n3640\n['6', '0', '0', '2', '0', '5', '0', '0', '0', '0', '1', '5', '0', '0', '0', '5', '6', '9', '0', '0', '0', '7', '4', '8', '0', '0', '0', '5', '4', '2', '6', '0', '0', '7', '5', '2', '6', '0', '0', '2', '6', '3', '0', '0', '0', '5', '3', '5', '6', '0', '0', '1', '8', '1', '0', '0', '0', '7', '3', '0', '3', '0', '0', '1', '8', '6', '6', '0', '0', '0', '0', '1', '0', '0', '0', '4', '0', '6', '6', '0', '0', '6', '2', '7', '0', '0', '0', '5', '9', '4', '0', '0', '0', '5', '8', '3', '6', '0', '0', '6', '3', '1', '0', '0', '0', '4', '1', '2', '6', '0', '0', '8', '3', '2', '6', '0', '0', '5', '5', '3', '0', '0', '0', '6', '2', '1', '0', '0', '0', '7', '6', '9', '0', '0', '0', '4', '0', '5', '0', '0', '0', '6', '5', '8', '0', '0', '0', '6', '9', '9', '6', '0', '0', '7', '9', '9', '6', '0', '1', '2', '9', '9', '6', '0', '0', '7', '0', '0', '6', '0', '0', '6', '6', '9', '0', '0', '0', '9', '5', '6', '6', '0', '0', '2', '8', '6', '0', '0', '0', '5', '2', '2', '0', '0', '0', '8', '0', '5', '6', '0', '0', '0', '8', '7', '6', '0', '0', '6', '0', '7', '0', '0', '0', '8', '6', '6', '0', '0', '0', '6', '0', '2', '0', '0', '0', '5', '7', '8', '6', '0', '0', '8', '4', '9', '6', '0', '0', '4', '7', '2', '0', '0', '0', '5', '4', '9', '0', '0', '0', '0', '4', '7', '0', '0', '0', '6', '6', '0', '6', '0', '0', '6', '4', '6', '6', '0', '0', '6', '2', '5', '6', '0', '0', '6', '7', '2', '0', '0', '0', '5', '8', '8', '0', '0', '0', '5', '6', '2', '6', '0', '0', '2', '5', '3', '6', '0', '0', '3', '5', '7', '0', '0', '0', '0', '0', '3', '0', '0', '0', '6', '5', '3', '6', '0', '0', '6', '5', '9', '6', '0', '0', '0', '6', '5', '0', '0', '0', '5', '5', '6', '6', '0', '0', '6', '5', '6', '6', '0', '0', '0', '9', '2', '0', '0', '0', '6', '8', '9', '6', '0', '0', '8', '1', '3', '0', '0', '0', '7', '6', '3', '0', '0', '0', '0', '2', '4', '6', '0', '0', '0', '0', '3', '6', '0', '0', '0', '0', '2', '6', '0', '0', '7', '8', '8', '6', '0', '0', '8', '9', '9', '6', '0', '0', '7', '7', '2', '6', '0', '0', '1', '4', '5', '0', '0', '0', '5', '0', '8', '6', '0', '0', '2', '9', '6', '0', '0', '0', '6', '1', '8', '0', '0', '0', '0', '2', '9', '6', '0', '0', '8', '4', '2', '6', '0', '0', '7', '6', '2', '6', '0', '0', '8', '7', '8', '6', '0', '0', '7', '0', '9', '6', '0', '0', '5', '9', '1', '0', '0', '0', '8', '3', '2', '6', '0', '1', '2', '6', '8', '6', '0', '0', '6', '3', '2', '0', '0', '0', '5', '1', '5', '0', '0', '0', '5', '2', '7', '6', '0', '0', '8', '5', '2', '0', '0', '0', '0', '1', '3', '0', '0', '0', '8', 
'2', '7', '0', '0', '0', '8', '1', '7', '6', '0', '0', '7', '8', '6', '0', '0', '0', '7', '6', '5', '0', '0', '0', '7', '8', '7', '6', '0', '0', '1', '0', '2', '6', '0', '0', '9', '9', '1', '0', '0', '0', '6', '7', '5', '6', '0', '0', '6', '7', '0', '6', '0', '0', '8', '4', '0']\n558\n"
]
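A note on the loop above: `errorlist.extend(eachcode)` adds the characters of the failing stock code one at a time, which is why the printed error list is a run of single digits and its length is 558 rather than a count of codes. A hedged sketch of the same loop with that detail fixed (same `ts`, `alllist`, and `newdir` as above):

```python
errorlist = []
for eachcode in alllist:
    try:
        data = ts.get_hist_data(eachcode)
        data.to_csv(os.path.join(newdir, eachcode))
    except Exception:
        # append() keeps the whole code string; extend() iterates over it
        # and stores each character separately.
        errorlist.append(eachcode)

print(errorlist)
print(len(errorlist))  # number of codes that failed to download
```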
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e734e95cf2953a4f8ad1bd58ba18a73eb2de63d1 | 7,618 | ipynb | Jupyter Notebook | 1-mnist/exp4-keras-dnn.ipynb | hustrlee/21-projects | 450ce425175af884b217091bfaa5b10374b1b3a6 | [
"MIT"
] | null | null | null | 1-mnist/exp4-keras-dnn.ipynb | hustrlee/21-projects | 450ce425175af884b217091bfaa5b10374b1b3a6 | [
"MIT"
] | null | null | null | 1-mnist/exp4-keras-dnn.ipynb | hustrlee/21-projects | 450ce425175af884b217091bfaa5b10374b1b3a6 | [
"MIT"
] | null | null | null | 36.980583 | 289 | 0.515621 | [
[
[
"import tensorflow_datasets as tfds\nimport numpy as np\n\nmnist_train, train_info = tfds.load(name=\"mnist\", split=\"train\", data_dir=\"./mnist_data/\", with_info=True)\nmnist_test, test_info = tfds.load(name=\"mnist\", split=\"test\", data_dir=\"./mnist_data/\", with_info=True)\n\nmnist_train = tfds.as_numpy(mnist_train)\nmnist_test = tfds.as_numpy(mnist_test)\n\ndef value_to_array(index: int, dim: int = 10) -> np.ndarray:\n assert index < dim, \"index 必须小于 dim\"\n res = np.zeros(dim, dtype=np.double)\n res[index] = 1\n return res\n\nX_train = np.array([el[\"image\"].flatten() for el in mnist_train], dtype=np.double)\ny_train = np.array([value_to_array(el[\"label\"]) for el in mnist_train], dtype=np.double)\nX_test = np.array([el[\"image\"].flatten() for el in mnist_test], dtype=np.double)\ny_test = np.array([value_to_array(el[\"label\"]) for el in mnist_test], dtype=np.double)",
"2022-01-27 15:49:14.002632: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.\n2022-01-27 15:49:14.002743: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)\n2022-01-27 15:49:14.034735: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz\n"
],
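The `value_to_array` helper above is a hand-rolled one-hot encoder; `tf.keras.utils.to_categorical` does the same job. A hedged alternative sketch for the training split (the test split would be analogous), materialising the examples into a list first so a generator is not consumed twice, and with pixel scaling to [0, 1] added as an optional extra that the original cell does not do:

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

# Materialise the tfds examples once, then build images and labels from the same list.
train_examples = list(mnist_train)
X_train = np.array([el["image"].flatten() for el in train_examples], dtype=np.double) / 255.0
y_train = to_categorical([el["label"] for el in train_examples], num_classes=10)

print(X_train.shape, y_train.shape)  # expected: (60000, 784) (60000, 10)
```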
[
"from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\nimport tensorflow as tf\n\nmodel = Sequential()\nmodel.add(Dense(units=64, activation=\"relu\", input_dim=784))\nmodel.add(Dense(units=10, activation=\"softmax\"))\n\nmodel.compile(loss=\"categorical_crossentropy\",\n optimizer=tf.keras.optimizers.Adam(),\n metrics=['accuracy'])\n\nmodel.summary()",
"Model: \"sequential\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n dense (Dense) (None, 64) 50240 \n \n dense_1 (Dense) (None, 10) 650 \n \n=================================================================\nTotal params: 50,890\nTrainable params: 50,890\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.2)",
"Epoch 1/20\n 1/1500 [..............................] - ETA: 7:12 - loss: 144.0690 - accuracy: 0.1250"
],
[
"loss_and_metrics = model.evaluate(X_test, y_test, batch_size=32)",
"313/313 [==============================] - 0s 420us/step - loss: 0.3711 - accuracy: 0.9362\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
e734efb9e3ba5700a5a80ab38dce7bcfc79c79cf | 86,525 | ipynb | Jupyter Notebook | wcmnikolatags.ipynb | wcmckee/niketa | 299daf5e7076e771a9db615d8d954c57a2855827 | [
"MIT"
] | null | null | null | wcmnikolatags.ipynb | wcmckee/niketa | 299daf5e7076e771a9db615d8d954c57a2855827 | [
"MIT"
] | null | null | null | wcmnikolatags.ipynb | wcmckee/niketa | 299daf5e7076e771a9db615d8d954c57a2855827 | [
"MIT"
] | null | null | null | 72.104167 | 8,433 | 0.580849 | [
[
[
"wcm Nikola Tags\n\nconvert ipynb/py doc imports as tags for nikola blog .meta files.\n\nWhen user searches for notebook to blog with bbknikola python script also get the tags for the .meta file. Open up the .py file and convert this:\n\nblogpost.py\n\nimport requests\nimport os\nimport re\n\nInto:\n\nblogpost.meta\n\nblogpost\nblogpost\n2015/02/31 00:00:00\nrequests, os, re",
"_____no_output_____"
],
[
"Categorie can be the name of the repo.\nNeed a repo for Nikola scripts",
"_____no_output_____"
]
],
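A minimal sketch of the conversion described above, reading the imports out of a post's .py file and producing the comma-separated tag line for the .meta file. It uses the standard-library `ast` module rather than the `modulefinder`/`runpy` experiments in the cells below, and `blogpost.py` is just the placeholder name from the example:

```python
import ast

def imports_as_tags(py_path):
    """Return a comma-separated string of top-level modules imported in py_path."""
    with open(py_path) as f:
        tree = ast.parse(f.read())
    tags = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            tags.extend(alias.name.split('.')[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            tags.append(node.module.split('.')[0])
    seen = []
    for tag in tags:          # de-duplicate, keeping first-seen order
        if tag not in seen:
            seen.append(tag)
    return ', '.join(seen)

print(imports_as_tags('blogpost.py'))  # e.g. "requests, os, re"
```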
[
[
"import modulefinder\nimport runpy\nimport os\nfrom walkdir import filtered_walk, dir_paths, all_paths, file_paths\n",
"_____no_output_____"
],
[
"mwcm = modulefinder.ModuleFinder()",
"_____no_output_____"
],
[
"mwcm.any_missing()",
"_____no_output_____"
],
[
"mwcm.run_script('/home/wcmckee/github/niketa/rgdsnatch.py')",
"_____no_output_____"
],
[
"mwcm.path",
"_____no_output_____"
],
[
"mwcm.scan_code",
"_____no_output_____"
],
[
"from splinter import Browser\nbrowser = Browser()",
"_____no_output_____"
],
[
"browser.visit('http://google.com')",
"_____no_output_____"
],
[
"browser.title",
"_____no_output_____"
],
[
"browser.request_url",
"_____no_output_____"
],
[
"#browser.click_link_by_href('http://www.google.co.nz/preferences')",
"_____no_output_____"
],
[
"import time",
"_____no_output_____"
],
[
"#Code for open up web browser on library system and \n#acceot their t&c.\n#Need to get web address to go to, and button find_by_name \n#to click.\n\nfrom splinter import Browser\n\nwith Browser() as browser:\n # Visit URL\n url = \"http://www.google.com\"\n browser.visit(url)\n browser.fill('q', 'artcontrol')\n # Find and click the 'search' button\n button = browser.find_by_name('btnG')\n # Interact with elements\n button.click()\n time.sleep(5)\n if browser.is_text_present('artcontrol.me'):\n print \"Yes, the official website was found!\"\n else:\n print \"No, it wasn't found... We need to improve our SEO techniques\"\n",
"Yes, the official website was found!\n"
],
[
"mwcm.report()",
"\n Name File\n ---- ----\nm ConfigParser /usr/lib/python2.7/ConfigParser.py\nm Cookie /usr/lib/python2.7/Cookie.py\nP OpenSSL /usr/lib/python2.7/dist-packages/OpenSSL/__init__.py\nm OpenSSL.SSL /usr/lib/python2.7/dist-packages/OpenSSL/SSL.py\nm OpenSSL._util /usr/lib/python2.7/dist-packages/OpenSSL/_util.py\nm OpenSSL.crypto /usr/lib/python2.7/dist-packages/OpenSSL/crypto.py\nm OpenSSL.rand /usr/lib/python2.7/dist-packages/OpenSSL/rand.py\nm OpenSSL.version /usr/lib/python2.7/dist-packages/OpenSSL/version.py\nm Queue /usr/lib/python2.7/Queue.py\nm StringIO /usr/lib/python2.7/StringIO.py\nm UserDict /usr/lib/python2.7/UserDict.py\nm _LWPCookieJar /usr/lib/python2.7/_LWPCookieJar.py\nm _MozillaCookieJar /usr/lib/python2.7/_MozillaCookieJar.py\nm __builtin__ \nm __future__ /usr/lib/python2.7/__future__.py\nm __main__ /home/wcmckee/github/niketa/rgdsnatch.py\nm _abcoll /usr/lib/python2.7/_abcoll.py\nm _bisect \nm _cffi_backend /usr/lib/python2.7/dist-packages/_cffi_backend.arm-linux-gnueabihf.so\nm _codecs \nm _collections \nm _ctypes /usr/lib/python2.7/lib-dynload/_ctypes.arm-linux-gnueabihf.so\nm _functools \nm _hashlib /usr/lib/python2.7/lib-dynload/_hashlib.arm-linux-gnueabihf.so\nm _heapq \nm _io \nm _json /usr/lib/python2.7/lib-dynload/_json.arm-linux-gnueabihf.so\nm _locale \nm _md5 \nm _osx_support /usr/lib/python2.7/_osx_support.py\nm _random \nm _sha \nm _sha256 \nm _sha512 \nm _socket \nm _sre \nm _ssl /usr/lib/python2.7/lib-dynload/_ssl.arm-linux-gnueabihf.so\nm _struct \nm _sysconfigdata /usr/lib/python2.7/_sysconfigdata.py\nm _sysconfigdata_nd /usr/lib/python2.7/plat-arm-linux-gnueabihf/_sysconfigdata_nd.py\nm _threading_local /usr/lib/python2.7/_threading_local.py\nm _warnings \nm _weakref \nm _weakrefset /usr/lib/python2.7/_weakrefset.py\nm abc /usr/lib/python2.7/abc.py\nm array \nm atexit /usr/lib/python2.7/atexit.py\nP backports /home/wcmckee/.local/lib/python2.7/site-packages/backports/__init__.py\nP backports.ssl_match_hostname /home/wcmckee/.local/lib/python2.7/site-packages/backports/ssl_match_hostname/__init__.py\nm base64 /usr/lib/python2.7/base64.py\nm bdb /usr/lib/python2.7/bdb.py\nm binascii \nm bisect /usr/lib/python2.7/bisect.py\nm bz2 /usr/lib/python2.7/lib-dynload/bz2.arm-linux-gnueabihf.so\nm cPickle \nm cStringIO \nm calendar /usr/lib/python2.7/calendar.py\nP certifi /home/wcmckee/.local/lib/python2.7/site-packages/certifi/__init__.py\nm certifi.core /home/wcmckee/.local/lib/python2.7/site-packages/certifi/core.py\nP cffi /usr/lib/python2.7/dist-packages/cffi/__init__.py\nm cffi.api /usr/lib/python2.7/dist-packages/cffi/api.py\nm cffi.commontypes /usr/lib/python2.7/dist-packages/cffi/commontypes.py\nm cffi.cparser /usr/lib/python2.7/dist-packages/cffi/cparser.py\nm cffi.ffiplatform /usr/lib/python2.7/dist-packages/cffi/ffiplatform.py\nm cffi.gc_weakref /usr/lib/python2.7/dist-packages/cffi/gc_weakref.py\nm cffi.lock /usr/lib/python2.7/dist-packages/cffi/lock.py\nm cffi.model /usr/lib/python2.7/dist-packages/cffi/model.py\nm cffi.vengine_cpy /usr/lib/python2.7/dist-packages/cffi/vengine_cpy.py\nm cffi.vengine_gen /usr/lib/python2.7/dist-packages/cffi/vengine_gen.py\nm cffi.verifier /usr/lib/python2.7/dist-packages/cffi/verifier.py\nm cgi /usr/lib/python2.7/cgi.py\nm cmd /usr/lib/python2.7/cmd.py\nm codecs /usr/lib/python2.7/codecs.py\nm collections /usr/lib/python2.7/collections.py\nm configparser /usr/lib/python2.7/dist-packages/configparser.py\nm configparser_helpers /usr/lib/python2.7/dist-packages/configparser_helpers.py\nm contextlib 
/usr/lib/python2.7/contextlib.py\nm cookielib /usr/lib/python2.7/cookielib.py\nm copy /usr/lib/python2.7/copy.py\nm copy_reg /usr/lib/python2.7/copy_reg.py\nP cryptography /usr/lib/python2.7/dist-packages/cryptography/__init__.py\nm cryptography.__about__ /usr/lib/python2.7/dist-packages/cryptography/__about__.py\nP cryptography.hazmat /usr/lib/python2.7/dist-packages/cryptography/hazmat/__init__.py\nP cryptography.hazmat.bindings /usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/__init__.py\nP cryptography.hazmat.bindings.openssl /usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/__init__.py\nm cryptography.hazmat.bindings.openssl.binding /usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py\nm cryptography.hazmat.bindings.utils /usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/utils.py\nP ctypes /usr/lib/python2.7/ctypes/__init__.py\nm ctypes._endian /usr/lib/python2.7/ctypes/_endian.py\nm ctypes.util /usr/lib/python2.7/ctypes/util.py\nm datetime \nm decimal /usr/lib/python2.7/decimal.py\nm difflib /usr/lib/python2.7/difflib.py\nm dis /usr/lib/python2.7/dis.py\nP distutils /usr/lib/python2.7/distutils/__init__.py\nm distutils.archive_util /usr/lib/python2.7/distutils/archive_util.py\nm distutils.cmd /usr/lib/python2.7/distutils/cmd.py\nP distutils.command /usr/lib/python2.7/distutils/command/__init__.py\nm distutils.config /usr/lib/python2.7/distutils/config.py\nm distutils.core /usr/lib/python2.7/distutils/core.py\nm distutils.debug /usr/lib/python2.7/distutils/debug.py\nm distutils.dep_util /usr/lib/python2.7/distutils/dep_util.py\nm distutils.dir_util /usr/lib/python2.7/distutils/dir_util.py\nm distutils.dist /usr/lib/python2.7/distutils/dist.py\nm distutils.errors /usr/lib/python2.7/distutils/errors.py\nm distutils.extension /usr/lib/python2.7/distutils/extension.py\nm distutils.fancy_getopt /usr/lib/python2.7/distutils/fancy_getopt.py\nm distutils.file_util /usr/lib/python2.7/distutils/file_util.py\nm distutils.log /usr/lib/python2.7/distutils/log.py\nm distutils.spawn /usr/lib/python2.7/distutils/spawn.py\nm distutils.sysconfig /usr/lib/python2.7/distutils/sysconfig.py\nm distutils.text_file /usr/lib/python2.7/distutils/text_file.py\nm distutils.util /usr/lib/python2.7/distutils/util.py\nm distutils.version /usr/lib/python2.7/distutils/version.py\nm distutils.versionpredicate /usr/lib/python2.7/distutils/versionpredicate.py\nm doctest /usr/lib/python2.7/doctest.py\nm dummy_thread /usr/lib/python2.7/dummy_thread.py\nm dummy_threading /usr/lib/python2.7/dummy_threading.py\nP email /usr/lib/python2.7/email/__init__.py\nm email._parseaddr /usr/lib/python2.7/email/_parseaddr.py\nm email.base64mime /usr/lib/python2.7/email/base64mime.py\nm email.charset /usr/lib/python2.7/email/charset.py\nm email.encoders /usr/lib/python2.7/email/encoders.py\nm email.errors /usr/lib/python2.7/email/errors.py\nm email.feedparser /usr/lib/python2.7/email/feedparser.py\nm email.generator /usr/lib/python2.7/email/generator.py\nm email.header /usr/lib/python2.7/email/header.py\nm email.iterators /usr/lib/python2.7/email/iterators.py\nm email.message /usr/lib/python2.7/email/message.py\nP email.mime /usr/lib/python2.7/email/mime/__init__.py\nm email.parser /usr/lib/python2.7/email/parser.py\nm email.quoprimime /usr/lib/python2.7/email/quoprimime.py\nm email.utils /usr/lib/python2.7/email/utils.py\nP encodings /usr/lib/python2.7/encodings/__init__.py\nm encodings.aliases /usr/lib/python2.7/encodings/aliases.py\nm errno \nm exceptions \nm 
fcntl \nm fnmatch /usr/lib/python2.7/fnmatch.py\nm ftplib /usr/lib/python2.7/ftplib.py\nm functools /usr/lib/python2.7/functools.py\nm gc \nm genericpath /usr/lib/python2.7/genericpath.py\nm getopt /usr/lib/python2.7/getopt.py\nm getpass /usr/lib/python2.7/getpass.py\nm gettext /usr/lib/python2.7/gettext.py\nm grp \nm gzip /usr/lib/python2.7/gzip.py\nm hashlib /usr/lib/python2.7/hashlib.py\nm heapq /usr/lib/python2.7/heapq.py\nm httplib /usr/lib/python2.7/httplib.py\nm imp \nP importlib /usr/lib/python2.7/importlib/__init__.py\nm inspect /usr/lib/python2.7/inspect.py\nm io /usr/lib/python2.7/io.py\nm itertools \nP json /usr/lib/python2.7/json/__init__.py\nm json.decoder /usr/lib/python2.7/json/decoder.py\nm json.encoder /usr/lib/python2.7/json/encoder.py\nm json.scanner /usr/lib/python2.7/json/scanner.py\nm keyword /usr/lib/python2.7/keyword.py\nm linecache /usr/lib/python2.7/linecache.py\nm locale /usr/lib/python2.7/locale.py\nP logging /usr/lib/python2.7/logging/__init__.py\nm marshal \nm math \nm md5 /usr/lib/python2.7/md5.py\nm mimetools /usr/lib/python2.7/mimetools.py\nm mimetypes /usr/lib/python2.7/mimetypes.py\nP ndg /usr/lib/pymodules/python2.7/ndg/__init__.py\nP ndg.httpsclient /usr/lib/pymodules/python2.7/ndg/httpsclient/__init__.py\nm ndg.httpsclient.ssl_peer_verification /usr/lib/pymodules/python2.7/ndg/httpsclient/ssl_peer_verification.py\nm ndg.httpsclient.subj_alt_name /usr/lib/pymodules/python2.7/ndg/httpsclient/subj_alt_name.py\nm netrc /usr/lib/python2.7/netrc.py\nm ntpath /usr/lib/python2.7/ntpath.py\nm nturl2path /usr/lib/python2.7/nturl2path.py\nm numbers /usr/lib/python2.7/numbers.py\nm opcode /usr/lib/python2.7/opcode.py\nm operator \nm optparse /usr/lib/python2.7/optparse.py\nm os /usr/lib/python2.7/os.py\nm os2emxpath /usr/lib/python2.7/os2emxpath.py\nm pdb /usr/lib/python2.7/pdb.py\nm pickle /usr/lib/python2.7/pickle.py\nm pkgutil /usr/lib/python2.7/pkgutil.py\nm platform /usr/lib/python2.7/platform.py\nm plistlib /usr/lib/python2.7/plistlib.py\nP ply /usr/lib/python2.7/dist-packages/ply/__init__.py\nm ply.lex /usr/lib/python2.7/dist-packages/ply/lex.py\nm ply.yacc /usr/lib/python2.7/dist-packages/ply/yacc.py\nm posix \nm posixpath /usr/lib/python2.7/posixpath.py\nm pprint /usr/lib/python2.7/pprint.py\nP praw /usr/local/lib/python2.7/dist-packages/praw/__init__.py\nm praw.decorators /usr/local/lib/python2.7/dist-packages/praw/decorators.py\nm praw.errors /usr/local/lib/python2.7/dist-packages/praw/errors.py\nm praw.handlers /usr/local/lib/python2.7/dist-packages/praw/handlers.py\nm praw.helpers /usr/local/lib/python2.7/dist-packages/praw/helpers.py\nm praw.internal /usr/local/lib/python2.7/dist-packages/praw/internal.py\nm praw.objects /usr/local/lib/python2.7/dist-packages/praw/objects.py\nm praw.settings /usr/local/lib/python2.7/dist-packages/praw/settings.py\nm pwd \nm py_compile /usr/lib/python2.7/py_compile.py\nP pyasn1 /usr/lib/python2.7/dist-packages/pyasn1/__init__.py\nP pyasn1.codec /usr/lib/python2.7/dist-packages/pyasn1/codec/__init__.py\nP pyasn1.codec.ber /usr/lib/python2.7/dist-packages/pyasn1/codec/ber/__init__.py\nm pyasn1.codec.ber.decoder /usr/lib/python2.7/dist-packages/pyasn1/codec/ber/decoder.py\nm pyasn1.codec.ber.eoo /usr/lib/python2.7/dist-packages/pyasn1/codec/ber/eoo.py\nP pyasn1.codec.cer /usr/lib/python2.7/dist-packages/pyasn1/codec/cer/__init__.py\nm pyasn1.codec.cer.decoder /usr/lib/python2.7/dist-packages/pyasn1/codec/cer/decoder.py\nP pyasn1.codec.der /usr/lib/python2.7/dist-packages/pyasn1/codec/der/__init__.py\nm 
pyasn1.codec.der.decoder /usr/lib/python2.7/dist-packages/pyasn1/codec/der/decoder.py\nP pyasn1.compat /usr/lib/python2.7/dist-packages/pyasn1/compat/__init__.py\nm pyasn1.compat.octets /usr/lib/python2.7/dist-packages/pyasn1/compat/octets.py\nm pyasn1.debug /usr/lib/python2.7/dist-packages/pyasn1/debug.py\nm pyasn1.error /usr/lib/python2.7/dist-packages/pyasn1/error.py\nP pyasn1.type /usr/lib/python2.7/dist-packages/pyasn1/type/__init__.py\nm pyasn1.type.base /usr/lib/python2.7/dist-packages/pyasn1/type/base.py\nm pyasn1.type.char /usr/lib/python2.7/dist-packages/pyasn1/type/char.py\nm pyasn1.type.constraint /usr/lib/python2.7/dist-packages/pyasn1/type/constraint.py\nm pyasn1.type.error /usr/lib/python2.7/dist-packages/pyasn1/type/error.py\nm pyasn1.type.namedtype /usr/lib/python2.7/dist-packages/pyasn1/type/namedtype.py\nm pyasn1.type.namedval /usr/lib/python2.7/dist-packages/pyasn1/type/namedval.py\nm pyasn1.type.tag /usr/lib/python2.7/dist-packages/pyasn1/type/tag.py\nm pyasn1.type.tagmap /usr/lib/python2.7/dist-packages/pyasn1/type/tagmap.py\nm pyasn1.type.univ /usr/lib/python2.7/dist-packages/pyasn1/type/univ.py\nm pyasn1.type.useful /usr/lib/python2.7/dist-packages/pyasn1/type/useful.py\nP pycparser /usr/lib/python2.7/dist-packages/pycparser/__init__.py\nm pycparser.ast_transforms /usr/lib/python2.7/dist-packages/pycparser/ast_transforms.py\nm pycparser.c_ast /usr/lib/python2.7/dist-packages/pycparser/c_ast.py\nm pycparser.c_lexer /usr/lib/python2.7/dist-packages/pycparser/c_lexer.py\nm pycparser.c_parser /usr/lib/python2.7/dist-packages/pycparser/c_parser.py\nm pycparser.plyparser /usr/lib/python2.7/dist-packages/pycparser/plyparser.py\nm pyexpat /usr/lib/python2.7/lib-dynload/pyexpat.arm-linux-gnueabihf.so\nm quopri /usr/lib/python2.7/quopri.py\nm random /usr/lib/python2.7/random.py\nm re /usr/lib/python2.7/re.py\nm readline /usr/lib/python2.7/lib-dynload/readline.arm-linux-gnueabihf.so\nm repr /usr/lib/python2.7/repr.py\nP requests /usr/local/lib/python2.7/dist-packages/requests/__init__.py\nm requests.adapters /usr/local/lib/python2.7/dist-packages/requests/adapters.py\nm requests.api /usr/local/lib/python2.7/dist-packages/requests/api.py\nm requests.auth /usr/local/lib/python2.7/dist-packages/requests/auth.py\nm requests.certs /usr/local/lib/python2.7/dist-packages/requests/certs.py\nm requests.compat /usr/local/lib/python2.7/dist-packages/requests/compat.py\nm requests.cookies /usr/local/lib/python2.7/dist-packages/requests/cookies.py\nm requests.exceptions /usr/local/lib/python2.7/dist-packages/requests/exceptions.py\nm requests.hooks /usr/local/lib/python2.7/dist-packages/requests/hooks.py\nm requests.models /usr/local/lib/python2.7/dist-packages/requests/models.py\nP requests.packages /usr/local/lib/python2.7/dist-packages/requests/packages/__init__.py\nP requests.packages.chardet /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/__init__.py\nm requests.packages.chardet.big5freq /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/big5freq.py\nm requests.packages.chardet.big5prober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/big5prober.py\nm requests.packages.chardet.chardistribution /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/chardistribution.py\nm requests.packages.chardet.charsetgroupprober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/charsetgroupprober.py\nm requests.packages.chardet.charsetprober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/charsetprober.py\nm 
requests.packages.chardet.codingstatemachine /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/codingstatemachine.py\nm requests.packages.chardet.compat /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/compat.py\nm requests.packages.chardet.constants /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/constants.py\nm requests.packages.chardet.cp949prober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/cp949prober.py\nm requests.packages.chardet.escprober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/escprober.py\nm requests.packages.chardet.escsm /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/escsm.py\nm requests.packages.chardet.eucjpprober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/eucjpprober.py\nm requests.packages.chardet.euckrfreq /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/euckrfreq.py\nm requests.packages.chardet.euckrprober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/euckrprober.py\nm requests.packages.chardet.euctwfreq /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/euctwfreq.py\nm requests.packages.chardet.euctwprober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/euctwprober.py\nm requests.packages.chardet.gb2312freq /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/gb2312freq.py\nm requests.packages.chardet.gb2312prober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/gb2312prober.py\nm requests.packages.chardet.hebrewprober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/hebrewprober.py\nm requests.packages.chardet.jisfreq /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/jisfreq.py\nm requests.packages.chardet.jpcntx /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/jpcntx.py\nm requests.packages.chardet.langbulgarianmodel /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/langbulgarianmodel.py\nm requests.packages.chardet.langcyrillicmodel /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/langcyrillicmodel.py\nm requests.packages.chardet.langgreekmodel /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/langgreekmodel.py\nm requests.packages.chardet.langhebrewmodel /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/langhebrewmodel.py\nm requests.packages.chardet.langhungarianmodel /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/langhungarianmodel.py\nm requests.packages.chardet.langthaimodel /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/langthaimodel.py\nm requests.packages.chardet.latin1prober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/latin1prober.py\nm requests.packages.chardet.mbcharsetprober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/mbcharsetprober.py\nm requests.packages.chardet.mbcsgroupprober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/mbcsgroupprober.py\nm requests.packages.chardet.mbcssm /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/mbcssm.py\nm requests.packages.chardet.sbcharsetprober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/sbcharsetprober.py\nm requests.packages.chardet.sbcsgroupprober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/sbcsgroupprober.py\nm requests.packages.chardet.sjisprober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/sjisprober.py\nm 
requests.packages.chardet.universaldetector /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/universaldetector.py\nm requests.packages.chardet.utf8prober /usr/local/lib/python2.7/dist-packages/requests/packages/chardet/utf8prober.py\nP requests.packages.urllib3 /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/__init__.py\nm requests.packages.urllib3._collections /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/_collections.py\nm requests.packages.urllib3.connection /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connection.py\nm requests.packages.urllib3.connectionpool /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py\nP requests.packages.urllib3.contrib /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/contrib/__init__.py\nm requests.packages.urllib3.contrib.pyopenssl /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/contrib/pyopenssl.py\nm requests.packages.urllib3.exceptions /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/exceptions.py\nm requests.packages.urllib3.fields /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/fields.py\nm requests.packages.urllib3.filepost /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/filepost.py\nP requests.packages.urllib3.packages /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/packages/__init__.py\nm requests.packages.urllib3.packages.ordered_dict /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/packages/ordered_dict.py\nm requests.packages.urllib3.packages.six /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/packages/six.py\nP requests.packages.urllib3.packages.ssl_match_hostname /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/packages/ssl_match_hostname/__init__.py\nm requests.packages.urllib3.packages.ssl_match_hostname._implementation /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/packages/ssl_match_hostname/_implementation.py\nm requests.packages.urllib3.poolmanager /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/poolmanager.py\nm requests.packages.urllib3.request /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/request.py\nm requests.packages.urllib3.response /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/response.py\nP requests.packages.urllib3.util /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/__init__.py\nm requests.packages.urllib3.util.connection /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/connection.py\nm requests.packages.urllib3.util.request /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/request.py\nm requests.packages.urllib3.util.response /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/response.py\nm requests.packages.urllib3.util.retry /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/retry.py\nm requests.packages.urllib3.util.ssl_ /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py\nm requests.packages.urllib3.util.timeout /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/timeout.py\nm requests.packages.urllib3.util.url /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/url.py\nm requests.sessions /usr/local/lib/python2.7/dist-packages/requests/sessions.py\nm requests.status_codes /usr/local/lib/python2.7/dist-packages/requests/status_codes.py\nm 
requests.structures /usr/local/lib/python2.7/dist-packages/requests/structures.py\nm requests.utils /usr/local/lib/python2.7/dist-packages/requests/utils.py\nm rfc822 /usr/lib/python2.7/rfc822.py\nm select \nm shlex /usr/lib/python2.7/shlex.py\nm shutil /usr/lib/python2.7/shutil.py\nm signal \nP simplejson /usr/lib/python2.7/dist-packages/simplejson/__init__.py\nm simplejson._speedups /usr/lib/python2.7/dist-packages/simplejson/_speedups.arm-linux-gnueabihf.so\nm simplejson.compat /usr/lib/python2.7/dist-packages/simplejson/compat.py\nm simplejson.decoder /usr/lib/python2.7/dist-packages/simplejson/decoder.py\nm simplejson.encoder /usr/lib/python2.7/dist-packages/simplejson/encoder.py\nm simplejson.ordered_dict /usr/lib/python2.7/dist-packages/simplejson/ordered_dict.py\nm simplejson.scanner /usr/lib/python2.7/dist-packages/simplejson/scanner.py\nm six /usr/local/lib/python2.7/dist-packages/six.py\nm socket /usr/lib/python2.7/socket.py\nm sre_compile /usr/lib/python2.7/sre_compile.py\nm sre_constants /usr/lib/python2.7/sre_constants.py\nm sre_parse /usr/lib/python2.7/sre_parse.py\nm ssl /usr/lib/python2.7/ssl.py\nm stat /usr/lib/python2.7/stat.py\nm string /usr/lib/python2.7/string.py\nm strop \nm struct /usr/lib/python2.7/struct.py\nm subprocess /usr/lib/python2.7/subprocess.py\nm sys \nm tarfile /usr/lib/python2.7/tarfile.py\nm tempfile /usr/lib/python2.7/tempfile.py\nm termios /usr/lib/python2.7/lib-dynload/termios.arm-linux-gnueabihf.so\nm textwrap /usr/lib/python2.7/textwrap.py\nm thread \nm threading /usr/lib/python2.7/threading.py\nm time \nm timeit /usr/lib/python2.7/timeit.py\nm token /usr/lib/python2.7/token.py\nm tokenize /usr/lib/python2.7/tokenize.py\nm traceback /usr/lib/python2.7/traceback.py\nm types /usr/lib/python2.7/types.py\nP unittest /usr/lib/python2.7/unittest/__init__.py\nm unittest.case /usr/lib/python2.7/unittest/case.py\nm unittest.loader /usr/lib/python2.7/unittest/loader.py\nm unittest.main /usr/lib/python2.7/unittest/main.py\nm unittest.result /usr/lib/python2.7/unittest/result.py\nm unittest.runner /usr/lib/python2.7/unittest/runner.py\nm unittest.signals /usr/lib/python2.7/unittest/signals.py\nm unittest.suite /usr/lib/python2.7/unittest/suite.py\nm unittest.util /usr/lib/python2.7/unittest/util.py\nm update_checker /usr/local/lib/python2.7/dist-packages/update_checker.py\nm urllib /usr/lib/python2.7/urllib.py\nm urllib2 /usr/lib/python2.7/urllib2.py\nm urlparse /usr/lib/python2.7/urlparse.py\nm uu /usr/lib/python2.7/uu.py\nm uuid /usr/lib/python2.7/uuid.py\nm warnings /usr/lib/python2.7/warnings.py\nm weakref /usr/lib/python2.7/weakref.py\nP xml /usr/lib/python2.7/xml/__init__.py\nP xml.parsers /usr/lib/python2.7/xml/parsers/__init__.py\nm xml.parsers.expat /usr/lib/python2.7/xml/parsers/expat.py\nm zipfile /usr/lib/python2.7/zipfile.py\nm zipimport \nm zlib \n\nMissing modules:\n? Carbon imported from plistlib\n? Carbon.File imported from plistlib\n? Carbon.Files imported from plistlib\n? EasyDialogs imported from getpass\n? MacOS imported from platform\n? SOCKS imported from ftplib\n? _dummy_thread imported from cffi.lock, configparser_helpers\n? _dummy_threading imported from dummy_threading\n? _emx_link imported from os\n? _scproxy imported from urllib\n? _subprocess imported from subprocess\n? _sysconfigdata_d imported from _sysconfigdata\n? _thread imported from cffi.cparser, cffi.lock, configparser_helpers\n? _winreg imported from mimetypes, platform, urllib\n? _xmlplus imported from xml\n? 
builtins imported from requests.packages.urllib3.packages.six\n? ce imported from os\n? cffi._pycparser imported from -\n? ctypes.macholib.dyld imported from ctypes.util\n? gestalt imported from platform\n? http imported from requests.compat\n? http.client imported from requests.packages.urllib3.connection\n? http.cookies imported from requests.compat\n? importlib.reload imported from simplejson.compat\n? java.lang imported from platform\n? msvcrt imported from getpass, subprocess\n? netbios imported from uuid\n? nt imported from ntpath, os\n? ordereddict imported from configparser\n? org.python.core imported from copy, pickle\n? os.path imported from cffi.ffiplatform, distutils.file_util, os, pkgutil, ply.lex, ply.yacc, requests.certs, shlex, shutil\n? os2 imported from os\n? packages.ssl_match_hostname.CertificateError imported from requests.packages.urllib3.connectionpool\n? packages.ssl_match_hostname.match_hostname imported from requests.packages.urllib3.connection\n? packages.urllib3.Retry imported from requests.adapters\n? packages.urllib3.util.Timeout imported from requests.adapters\n? packages.urllib3.util.parse_url imported from requests.models\n? queue imported from requests.packages.urllib3.connectionpool\n? riscos imported from os\n? riscosenviron imported from os\n? riscospath imported from os\n? rourl2path imported from urllib\n? six.moves.urllib.parse imported from praw, praw.objects\n? urllib.parse imported from requests.compat, requests.packages.urllib3.poolmanager, requests.packages.urllib3.request\n? urllib.request imported from requests.compat\n? vms_lib imported from platform\n? win32api imported from platform\n? win32con imported from platform\n? win32pipe imported from platform\n? win32wnet imported from uuid\n"
],
[
"runpy.run_module('requests')",
"_____no_output_____"
],
[
"nbog = raw_input('Name of notebook to tag: ')",
"Name of notebook to tag: wcmnikolatags\n"
],
[
"files = file_paths(filtered_walk('/home/wcmckee/github/', depth=100, included_files=[nbog + '.ipynb']))",
"_____no_output_____"
],
[
"#Easier to access ipynb and access the code import cell.\n#Parse the notebook for import tags\n#IPython module wrapper for returning list of imported modules\n#from a ipynb file. \n#Get the list of modules for tags from this.",
"_____no_output_____"
],
[
"for fil in files:\n #print fil\n opfil = open(fil, 'r')\n print opfil.read()\n opfil.close()",
"{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {},\n \"source\": [\n \"wcm Nikola Tags\\n\",\n \"\\n\",\n \"convert ipynb/py doc imports as tags for nikola blog .meta files.\\n\",\n \"\\n\",\n \"When user searches for notebook to blog with bbknikola python script also get the tags for the .meta file. Open up the .py file and convert this:\\n\",\n \"\\n\",\n \"blogpost.py\\n\",\n \"\\n\",\n \"import requests\\n\",\n \"import os\\n\",\n \"import re\\n\",\n \"\\n\",\n \"Into:\\n\",\n \"\\n\",\n \"blogpost.meta\\n\",\n \"\\n\",\n \"blogpost\\n\",\n \"blogpost\\n\",\n \"2015/02/31 00:00:00\\n\",\n \"requests, os, re\"\n ]\n },\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {},\n \"source\": [\n \"Categorie can be the name of the repo.\\n\",\n \"Need a repo for Nikola scripts\"\n ]\n },\n {\n \"cell_type\": \"code\",\n \"execution_count\": 20,\n \"metadata\": {\n \"collapsed\": false\n },\n \"outputs\": [],\n \"source\": [\n \"import os\\n\",\n \"from walkdir import filtered_walk, dir_paths, all_paths, file_paths\\n\"\n ]\n },\n {\n \"cell_type\": \"code\",\n \"execution_count\": 21,\n \"metadata\": {\n \"collapsed\": false\n },\n \"outputs\": [\n {\n \"name\": \"stdout\",\n \"output_type\": \"stream\",\n \"text\": [\n \"Name of notebook to tag: wcmnikolatags\\n\"\n ]\n }\n ],\n \"source\": [\n \"nbog = raw_input('Name of notebook to tag: ')\"\n ]\n },\n {\n \"cell_type\": \"code\",\n \"execution_count\": 22,\n \"metadata\": {\n \"collapsed\": false\n },\n \"outputs\": [],\n \"source\": [\n \"files = file_paths(filtered_walk('/home/wcmckee/github/', depth=100, included_files=[nbog + '.py']))\"\n ]\n },\n {\n \"cell_type\": \"code\",\n \"execution_count\": 23,\n \"metadata\": {\n \"collapsed\": false\n },\n \"outputs\": [\n {\n \"name\": \"stdout\",\n \"output_type\": \"stream\",\n \"text\": [\n \"<generator object file_paths at 0x97ece14>\\n\"\n ]\n }\n ],\n \"source\": [\n \"print files\"\n ]\n },\n {\n \"cell_type\": \"code\",\n \"execution_count\": 24,\n \"metadata\": {\n \"collapsed\": false,\n \"scrolled\": true\n },\n \"outputs\": [\n {\n \"name\": \"stdout\",\n \"output_type\": \"stream\",\n \"text\": [\n \"\\n\",\n \"# coding: utf-8\\n\",\n \"\\n\",\n \"# wcm Nikola Tags\\n\",\n \"# \\n\",\n \"# convert ipynb/py doc imports as tags for nikola blog .meta files.\\n\",\n \"# \\n\",\n \"# When user searches for notebook to blog with bbknikola python script also get the tags for the .meta file. 
Open up the .py file and convert this:\\n\",\n \"# \\n\",\n \"# blogpost.py\\n\",\n \"# \\n\",\n \"# import requests\\n\",\n \"# import os\\n\",\n \"# import re\\n\",\n \"# \\n\",\n \"# Into:\\n\",\n \"# \\n\",\n \"# blogpost.meta\\n\",\n \"# \\n\",\n \"# blogpost\\n\",\n \"# blogpost\\n\",\n \"# 2015/02/31 00:00:00\\n\",\n \"# requests, os, re\\n\",\n \"\\n\",\n \"# Categorie can be the name of the repo.\\n\",\n \"# Need a repo for Nikola scripts\\n\",\n \"\\n\",\n \"# In[13]:\\n\",\n \"\\n\",\n \"import os\\n\",\n \"from walkdir import filtered_walk, dir_paths, all_paths, file_paths\\n\",\n \"\\n\",\n \"\\n\",\n \"# In[14]:\\n\",\n \"\\n\",\n \"nbog = raw_input('Name of notebook to tag: ')\\n\",\n \"\\n\",\n \"\\n\",\n \"# In[15]:\\n\",\n \"\\n\",\n \"files = file_paths(filtered_walk('/home/wcmckee/github/', depth=100, included_files=[nbog + '.py']))\\n\",\n \"\\n\",\n \"\\n\",\n \"# In[16]:\\n\",\n \"\\n\",\n \"print files\\n\",\n \"\\n\",\n \"\\n\",\n \"# In[17]:\\n\",\n \"\\n\",\n \"for fil in files:\\n\",\n \" #print fil\\n\",\n \" opfil = open(fil, 'r')\\n\",\n \" print opfil.read()\\n\",\n \" opfil.close()\\n\",\n \"\\n\",\n \"\\n\",\n \"# In[19]:\\n\",\n \"\\n\",\n \"opfilz = open(fil, 'r')\\n\",\n \"\\n\",\n \"opfilz.read()\\n\",\n \"\\n\",\n \"\\n\",\n \"# In[1]:\\n\",\n \"\\n\",\n \"#python module that can be used to scan .py files and return\\n\",\n \"#the imports as a list.\\n\",\n \"\\n\",\n \"\\n\",\n \"# In[ ]:\\n\",\n \"\\n\",\n \"\\n\",\n \"\\n\",\n \"\\n\"\n ]\n }\n ],\n \"source\": [\n \"for fil in files:\\n\",\n \" #print fil\\n\",\n \" opfil = open(fil, 'r')\\n\",\n \" print opfil.read()\\n\",\n \" opfil.close()\"\n ]\n },\n {\n \"cell_type\": \"code\",\n \"execution_count\": 27,\n \"metadata\": {\n \"collapsed\": false,\n \"scrolled\": true\n },\n \"outputs\": [\n {\n \"data\": {\n \"text/plain\": [\n \"\\\"\\\\n# coding: utf-8\\\\n\\\\n# wcm Nikola Tags\\\\n# \\\\n# convert ipynb/py doc imports as tags for nikola blog .meta files.\\\\n# \\\\n# When user searches for notebook to blog with bbknikola python script also get the tags for the .meta file. 
Open up the .py file and convert this:\\\\n# \\\\n# blogpost.py\\\\n# \\\\n# import requests\\\\n# import os\\\\n# import re\\\\n# \\\\n# Into:\\\\n# \\\\n# blogpost.meta\\\\n# \\\\n# blogpost\\\\n# blogpost\\\\n# 2015/02/31 00:00:00\\\\n# requests, os, re\\\\n\\\\n# Categorie can be the name of the repo.\\\\n# Need a repo for Nikola scripts\\\\n\\\\n# In[13]:\\\\n\\\\nimport os\\\\nfrom walkdir import filtered_walk, dir_paths, all_paths, file_paths\\\\n\\\\n\\\\n# In[14]:\\\\n\\\\nnbog = raw_input('Name of notebook to tag: ')\\\\n\\\\n\\\\n# In[15]:\\\\n\\\\nfiles = file_paths(filtered_walk('/home/wcmckee/github/', depth=100, included_files=[nbog + '.py']))\\\\n\\\\n\\\\n# In[16]:\\\\n\\\\nprint files\\\\n\\\\n\\\\n# In[17]:\\\\n\\\\nfor fil in files:\\\\n #print fil\\\\n opfil = open(fil, 'r')\\\\n print opfil.read()\\\\n opfil.close()\\\\n\\\\n\\\\n# In[19]:\\\\n\\\\nopfilz = open(fil, 'r')\\\\n\\\\nopfilz.read()\\\\n\\\\n\\\\n# In[1]:\\\\n\\\\n#python module that can be used to scan .py files and return\\\\n#the imports as a list.\\\\n\\\\n\\\\n# In[ ]:\\\\n\\\\n\\\\n\\\\n\\\"\"\n ]\n },\n \"execution_count\": 27,\n \"metadata\": {},\n \"output_type\": \"execute_result\"\n }\n ],\n \"source\": [\n \"opfilz = open(fil, 'r')\\n\",\n \"\\n\",\n \"opfilz.read()\"\n ]\n },\n {\n \"cell_type\": \"code\",\n \"execution_count\": 26,\n \"metadata\": {\n \"collapsed\": true\n },\n \"outputs\": [],\n \"source\": [\n \"#python module that can be used to scan .py files and return\\n\",\n \"#the imports as a list.\"\n ]\n },\n {\n \"cell_type\": \"code\",\n \"execution_count\": null,\n \"metadata\": {\n \"collapsed\": true\n },\n \"outputs\": [],\n \"source\": []\n }\n ],\n \"metadata\": {\n \"kernelspec\": {\n \"display_name\": \"Python 2\",\n \"language\": \"python\",\n \"name\": \"python2\"\n },\n \"language_info\": {\n \"codemirror_mode\": {\n \"name\": \"ipython\",\n \"version\": 2\n },\n \"file_extension\": \".py\",\n \"mimetype\": \"text/x-python\",\n \"name\": \"python\",\n \"nbconvert_exporter\": \"python\",\n \"pygments_lexer\": \"ipython2\",\n \"version\": \"2.7.3\"\n }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 0\n}\n\n"
],
[
"opfilz = open(fil, 'r')\n\nopfilz.read()",
"_____no_output_____"
],
[
"#python module that can be used to scan .py files and return\n#the imports as a list.",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e734f1dc431dbe23d0cfec768e11e5e2b1816db2 | 1,898 | ipynb | Jupyter Notebook | categorical_to_OneHotEncoding.ipynb | shreya996/tensorflow_tensorboard | 77be92d2fd8bb8697f639185e3234c2dc5c00506 | [
"MIT"
] | null | null | null | categorical_to_OneHotEncoding.ipynb | shreya996/tensorflow_tensorboard | 77be92d2fd8bb8697f639185e3234c2dc5c00506 | [
"MIT"
] | null | null | null | categorical_to_OneHotEncoding.ipynb | shreya996/tensorflow_tensorboard | 77be92d2fd8bb8697f639185e3234c2dc5c00506 | [
"MIT"
] | null | null | null | 19.978947 | 93 | 0.513172 | [
[
[
"\nimport numpy as np\nvalues=np.load(\"y.npy\") #y.npy with categorical data\nprint(values)\n ",
"['var13' 'var13' 'var13' ... 'var1' 'var1' 'var1']\n"
],
[
"#conversion of categorical data(labels) to integer encoding then to one-hot encoding\n\nfrom sklearn.preprocessing import OneHotEncoder,LabelEncoder\n\nenc = OneHotEncoder(handle_unknown='ignore')\n\nlben=LabelEncoder()\nvalues=lben.fit_transform(values)\nvalues1=values.reshape(-1,1)\nprint(values1.shape)\nenc.fit(values1)\nonehotlabels = enc.transform(values1).toarray()\nonehotlabels\nnp.save(\"y_onehot.npy\",onehotlabels)\n",
"(45630, 1)\n"
],
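For intuition, the same LabelEncoder then OneHotEncoder round trip on a tiny hand-made label array looks like this; the toy labels are invented for illustration and are independent of the y.npy data above:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

toy = np.array(['var1', 'var13', 'var2', 'var1'])

# Strings -> integer codes; classes are sorted, so var1=0, var13=1, var2=2.
codes = LabelEncoder().fit_transform(toy).reshape(-1, 1)

# Integer codes -> one column per class, a single 1 per row.
onehot = OneHotEncoder(handle_unknown='ignore').fit_transform(codes).toarray()

print(codes.ravel())  # [0 1 2 0]
print(onehot)         # 4 x 3 array
```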
[
"j=np.load(\"y_onehot.npy\")\nprint(j.shape)",
"(45630, 7)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e734f5de4ba8da8cbc9fbabcfc7b0973db13aa9e | 121,114 | ipynb | Jupyter Notebook | 01-intro-to-deep-learning/Task2-checkpoint.ipynb | AaronJH3/intro-to-deep-learning | 7ec84aa85932831ccb7c579b98596cbc2a187e53 | [
"Unlicense"
] | null | null | null | 01-intro-to-deep-learning/Task2-checkpoint.ipynb | AaronJH3/intro-to-deep-learning | 7ec84aa85932831ccb7c579b98596cbc2a187e53 | [
"Unlicense"
] | null | null | null | 01-intro-to-deep-learning/Task2-checkpoint.ipynb | AaronJH3/intro-to-deep-learning | 7ec84aa85932831ccb7c579b98596cbc2a187e53 | [
"Unlicense"
] | null | null | null | 186.616333 | 17,656 | 0.871559 | [
[
[
"Use Keras to build 3 networks, each with at least 10 hidden layers such that:\n\n* The first model has fewer than 10 nodes per layer.\n* The second model has between 10-50 nodes per layer.\n* The third model has between 50-100 nodes per layer.\n\nThen, answer these questions: \n\n* Did any of these models achieve better than 20% accuracy on validation or test data?\n * State a hypothesis about why these networks performed the way they did.\n * *An answer to this question is given in a notebook from the next section [01-activations](/02-training-and-regularization-tactics/01-activations.ipynb)*\n* How many total trainable parameters do each of these models have?\n* Is there a clear correlation between number of trainable parameters and accuracy?\n * Consider your results from part one in answering this question.\n",
"_____no_output_____"
]
],
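One of the questions above asks for the number of trainable parameters in each model. For fully connected stacks this is easy to check by hand: a Dense layer mapping n_in inputs to n_out units has (n_in + 1) * n_out parameters, the +1 being the bias. A small sketch of that bookkeeping for the three architectures built below (784 inputs, nine hidden layers of 10, 50 or 100 units, then 10 outputs), which can be compared against each model.summary():

```python
# Trainable parameters of a stack of Dense layers:
# each transition n_in -> n_out contributes (n_in + 1) * n_out weights.
def dense_params(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        total += (n_in + 1) * n_out
    return total

print(dense_params([784] + [10] * 9 + [10]))   # 8,840   (matches model 1's summary)
print(dense_params([784] + [50] * 9 + [10]))   # 60,160  (matches model 2's summary)
print(dense_params([784] + [100] * 9 + [10]))  # 160,310 for model 3
```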
[
[
"# For drawing the MNIST digits as well as plots to help us evaluate performance we\n# will make extensive use of matplotlib\nfrom matplotlib import pyplot as plt\n\n# All of the Keras datasets are in keras.datasets\nfrom tensorflow.keras.datasets import mnist\n\n#Allows us to flatten 2d given data\nfrom tensorflow.keras.utils import to_categorical\n\n# Keras has already split the data into training and test data\n(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n\n# 28 x 28 = 784, because that's the dimensions of the MNIST data.\nimage_size = 784\n\n# Reshaping the training_images and test_images to lists of vectors with length 784\n# instead of lists of 2D arrays. Same for the test_images\ntraining_data = training_images.reshape(training_images.shape[0], image_size) \ntest_data = test_images.reshape(test_images.shape[0], image_size)\n\n# Create 1-hot encoded vectors using to_categorical\nnum_classes = 10 # Because it's how many digits we have (0-9) \n\n# to_categorical takes a list of integers (our labels) and makes them into 1-hot vectors\ntraining_labels = to_categorical(training_labels, num_classes)\ntest_labels = to_categorical(test_labels, num_classes)\n\nprint(\"training data: \", training_images.shape, \" ==> \", training_data.shape)\nprint(\"test data: \", test_images.shape, \" ==> \", test_data.shape)\n",
"training data: (60000, 28, 28) ==> (60000, 784)\ntest data: (10000, 28, 28) ==> (10000, 784)\n"
],
[
"\n#Use Keras to build 3 networks, each with at least 10 hidden layers such that:\n\n#The first model has fewer than 10 nodes per layer.\n\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\n\n# Sequential models are a series of layers applied linearly.\nmodel1 = Sequential()\n\n#This defines the layer itself. 400x784 (size of data)\nmodel1.add(Dense(units=10, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel1.add(Dense(units=10, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel1.add(Dense(units=10, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel1.add(Dense(units=10, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel1.add(Dense(units=10, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel1.add(Dense(units=10, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel1.add(Dense(units=10, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel1.add(Dense(units=10, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel1.add(Dense(units=10, activation='sigmoid', input_shape=(image_size,)))\n\n# This is how the output layer gets added, the 'softmax' activation function ensures\n# that the sum of the values in the output nodes is 1. Softmax is very\n# common in classification networks. \nmodel1.add(Dense(units=num_classes, activation='softmax', input_shape=(image_size,)))\n\n# This function provides useful text data for our network\nmodel1.summary()\n\n#----\n\n# The second model has between 10-50 nodes per layer.\n\n# Sequential models are a series of layers applied linearly.\nmodel2 = Sequential()\n\n#This defines the layer itself. 400x784 (size of data)\nmodel2.add(Dense(units=50, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel2.add(Dense(units=50, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel2.add(Dense(units=50, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel2.add(Dense(units=50, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel2.add(Dense(units=50, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel2.add(Dense(units=50, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel2.add(Dense(units=50, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel2.add(Dense(units=50, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel2.add(Dense(units=50, activation='sigmoid', input_shape=(image_size,)))\n\n# This is how the output layer gets added, the 'softmax' activation function ensures\n# that the sum of the values in the output nodes is 1. Softmax is very\n# common in classification networks. 
\nmodel2.add(Dense(units=num_classes, activation='softmax', input_shape=(image_size,)))\n\n# This function provides useful text data for our network\nmodel2.summary()\n\n#----\n\n# The third model has between 50-100 nodes per layer.\n\n# Sequential models are a series of layers applied linearly.\nmodel3 = Sequential()\n\n#This defines the layer itself. 400x784 (size of data)\nmodel3.add(Dense(units=100, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel3.add(Dense(units=100, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel3.add(Dense(units=100, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel3.add(Dense(units=100, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel3.add(Dense(units=100, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel3.add(Dense(units=100, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel3.add(Dense(units=100, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel3.add(Dense(units=100, activation='sigmoid', input_shape=(image_size,)))\n\n#This defines the layer itself. 400x784 (size of data)\nmodel3.add(Dense(units=100, activation='sigmoid', input_shape=(image_size,)))\n\n# This is how the output layer gets added, the 'softmax' activation function ensures\n# that the sum of the values in the output nodes is 1. Softmax is very\n# common in classification networks. \nmodel3.add(Dense(units=num_classes, activation='softmax', input_shape=(image_size,)))\n\n# This function provides useful text data for our network\nmodel3.summary()",
"Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_10 (Dense) (None, 10) 7850 \n_________________________________________________________________\ndense_11 (Dense) (None, 10) 110 \n_________________________________________________________________\ndense_12 (Dense) (None, 10) 110 \n_________________________________________________________________\ndense_13 (Dense) (None, 10) 110 \n_________________________________________________________________\ndense_14 (Dense) (None, 10) 110 \n_________________________________________________________________\ndense_15 (Dense) (None, 10) 110 \n_________________________________________________________________\ndense_16 (Dense) (None, 10) 110 \n_________________________________________________________________\ndense_17 (Dense) (None, 10) 110 \n_________________________________________________________________\ndense_18 (Dense) (None, 10) 110 \n_________________________________________________________________\ndense_19 (Dense) (None, 10) 110 \n=================================================================\nTotal params: 8,840\nTrainable params: 8,840\nNon-trainable params: 0\n_________________________________________________________________\nModel: \"sequential_2\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_20 (Dense) (None, 50) 39250 \n_________________________________________________________________\ndense_21 (Dense) (None, 50) 2550 \n_________________________________________________________________\ndense_22 (Dense) (None, 50) 2550 \n_________________________________________________________________\ndense_23 (Dense) (None, 50) 2550 \n_________________________________________________________________\ndense_24 (Dense) (None, 50) 2550 \n_________________________________________________________________\ndense_25 (Dense) (None, 50) 2550 \n_________________________________________________________________\ndense_26 (Dense) (None, 50) 2550 \n_________________________________________________________________\ndense_27 (Dense) (None, 50) 2550 \n_________________________________________________________________\ndense_28 (Dense) (None, 50) 2550 \n_________________________________________________________________\ndense_29 (Dense) (None, 10) 510 \n=================================================================\nTotal params: 60,160\nTrainable params: 60,160\nNon-trainable params: 0\n_________________________________________________________________\nModel: \"sequential_3\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_30 (Dense) (None, 100) 78500 \n_________________________________________________________________\ndense_31 (Dense) (None, 100) 10100 \n_________________________________________________________________\ndense_32 (Dense) (None, 100) 10100 \n_________________________________________________________________\ndense_33 (Dense) (None, 100) 10100 \n_________________________________________________________________\ndense_34 (Dense) (None, 100) 10100 \n_________________________________________________________________\ndense_35 (Dense) (None, 100) 10100 \n_________________________________________________________________\ndense_36 (Dense) (None, 100) 10100 
\n_________________________________________________________________\ndense_37 (Dense) (None, 100) 10100 \n_________________________________________________________________\ndense_38 (Dense) (None, 100) 10100 \n_________________________________________________________________\ndense_39 (Dense) (None, 10) 1010 \n=================================================================\nTotal params: 160,310\nTrainable params: 160,310\nNon-trainable params: 0\n_________________________________________________________________\n"
],
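[
"# A short check, assuming model1, model2 and model3 are the three networks defined above:\n# count_params() returns the total number of parameters, matching the 'Total params'\n# line of each summary, so the three capacities can be compared at a glance.\nfor name, m in [('model1', model1), ('model2', model2), ('model3', model3)]:\n    print(f'{name}: {m.count_params():,} parameters')",
"_____no_output_____"
],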
[
"# sgd stands for stochastic gradient descent.\n# categorical_crossentropy is a common loss function used for categorical classification.\n# accuracy is the percent of predictions that were correct.\nmodel1.compile(optimizer=\"sgd\", loss='categorical_crossentropy', metrics=['accuracy'])\n\n# The network will make predictions for 128 flattened images per correction.\n# It will make a prediction on each item in the training set 5 times (5 epochs)\n# And 10% of the data will be used as validation data.\nhistory1 = model1.fit(training_data, training_labels, batch_size=128, epochs=10, verbose=True, validation_split=.1)\n\n\n#----\n\n# sgd stands for stochastic gradient descent.\n# categorical_crossentropy is a common loss function used for categorical classification.\n# accuracy is the percent of predictions that were correct.\nmodel2.compile(optimizer=\"sgd\", loss='categorical_crossentropy', metrics=['accuracy'])\n\n# The network will make predictions for 128 flattened images per correction.\n# It will make a prediction on each item in the training set 5 times (5 epochs)\n# And 10% of the data will be used as validation data.\nhistory2 = model2.fit(training_data, training_labels, batch_size=128, epochs=10, verbose=True, validation_split=.1)\n\n\n#---\n\n# sgd stands for stochastic gradient descent.\n# categorical_crossentropy is a common loss function used for categorical classification.\n# accuracy is the percent of predictions that were correct.\nmodel3.compile(optimizer=\"sgd\", loss='categorical_crossentropy', metrics=['accuracy'])\n\n# The network will make predictions for 128 flattened images per correction.\n# It will make a prediction on each item in the training set 5 times (5 epochs)\n# And 10% of the data will be used as validation data.\nhistory3 = model3.fit(training_data, training_labels, batch_size=128, epochs=10, verbose=True, validation_split=.1)\n",
"Train on 54000 samples, validate on 6000 samples\nEpoch 1/10\n54000/54000 [==============================] - 8s 143us/sample - loss: 2.3437 - accuracy: 0.0976 - val_loss: 2.3077 - val_accuracy: 0.1113\nEpoch 2/10\n54000/54000 [==============================] - 5s 94us/sample - loss: 2.3037 - accuracy: 0.1092 - val_loss: 2.3021 - val_accuracy: 0.1050\nEpoch 3/10\n54000/54000 [==============================] - 5s 92us/sample - loss: 2.3014 - accuracy: 0.1132 - val_loss: 2.3019 - val_accuracy: 0.1050\nEpoch 4/10\n54000/54000 [==============================] - 5s 92us/sample - loss: 2.3012 - accuracy: 0.1132 - val_loss: 2.3019 - val_accuracy: 0.1050\nEpoch 5/10\n54000/54000 [==============================] - 6s 109us/sample - loss: 2.3012 - accuracy: 0.1132 - val_loss: 2.3020 - val_accuracy: 0.1050\nEpoch 6/10\n54000/54000 [==============================] - 5s 92us/sample - loss: 2.3012 - accuracy: 0.1132 - val_loss: 2.3019 - val_accuracy: 0.1050\nEpoch 7/10\n54000/54000 [==============================] - 5s 85us/sample - loss: 2.3012 - accuracy: 0.1132 - val_loss: 2.3019 - val_accuracy: 0.1050\nEpoch 8/10\n54000/54000 [==============================] - 5s 89us/sample - loss: 2.3012 - accuracy: 0.1132 - val_loss: 2.3020 - val_accuracy: 0.1050\nEpoch 9/10\n54000/54000 [==============================] - 5s 88us/sample - loss: 2.3012 - accuracy: 0.1132 - val_loss: 2.3020 - val_accuracy: 0.1050\nEpoch 10/10\n54000/54000 [==============================] - 5s 95us/sample - loss: 2.3012 - accuracy: 0.1132 - val_loss: 2.3020 - val_accuracy: 0.105012 - accuracy: - ETA: 0s - loss: 2.3012 - accuracy: 0.\nTrain on 54000 samples, validate on 6000 samples\nEpoch 1/10\n54000/54000 [==============================] - 8s 157us/sample - loss: 2.3270 - accuracy: 0.1119 - val_loss: 2.3021 - val_accuracy: 0.1050\nEpoch 2/10\n54000/54000 [==============================] - 7s 125us/sample - loss: 2.3014 - accuracy: 0.1132 - val_loss: 2.3023 - val_accuracy: 0.1050\nEpoch 3/10\n54000/54000 [==============================] - 7s 124us/sample - loss: 2.3015 - accuracy: 0.1129 - val_loss: 2.3015 - val_accuracy: 0.1050\nEpoch 4/10\n54000/54000 [==============================] - 6s 113us/sample - loss: 2.3014 - accuracy: 0.1132 - val_loss: 2.3020 - val_accuracy: 0.1050\nEpoch 5/10\n54000/54000 [==============================] - 6s 111us/sample - loss: 2.3015 - accuracy: 0.1132 - val_loss: 2.3020 - val_accuracy: 0.1050\nEpoch 6/10\n54000/54000 [==============================] - 6s 119us/sample - loss: 2.3015 - accuracy: 0.1132 - val_loss: 2.3021 - val_accuracy: 0.1050\nEpoch 7/10\n54000/54000 [==============================] - 6s 114us/sample - loss: 2.3013 - accuracy: 0.1131 - val_loss: 2.3025 - val_accuracy: 0.1050\nEpoch 8/10\n54000/54000 [==============================] - 6s 115us/sample - loss: 2.3014 - accuracy: 0.1132 - val_loss: 2.3018 - val_accuracy: 0.1050\nEpoch 9/10\n54000/54000 [==============================] - 6s 118us/sample - loss: 2.3014 - accuracy: 0.1128 - val_loss: 2.3021 - val_accuracy: 0.1050\nEpoch 10/10\n54000/54000 [==============================] - 6s 115us/sample - loss: 2.3014 - accuracy: 0.1132 - val_loss: 2.3019 - val_accuracy: 0.1050\nTrain on 54000 samples, validate on 6000 samples\nEpoch 1/10\n54000/54000 [==============================] - 12s 221us/sample - loss: 2.3104 - accuracy: 0.1104 - val_loss: 2.3020 - val_accuracy: 0.1050\nEpoch 2/10\n54000/54000 [==============================] - 10s 184us/sample - loss: 2.3016 - accuracy: 0.1124 - val_loss: 2.3023 - val_accuracy: 0.1050\nEpoch 
3/10\n54000/54000 [==============================] - 11s 195us/sample - loss: 2.3016 - accuracy: 0.1126 - val_loss: 2.3018 - val_accuracy: 0.1050\nEpoch 4/10\n54000/54000 [==============================] - 10s 178us/sample - loss: 2.3016 - accuracy: 0.1130 - val_loss: 2.3020 - val_accuracy: 0.1050\nEpoch 5/10\n54000/54000 [==============================] - 11s 207us/sample - loss: 2.3015 - accuracy: 0.1124 - val_loss: 2.3025 - val_accuracy: 0.1050\nEpoch 6/10\n54000/54000 [==============================] - 11s 198us/sample - loss: 2.3017 - accuracy: 0.1130 - val_loss: 2.3021 - val_accuracy: 0.1050\nEpoch 7/10\n54000/54000 [==============================] - 9s 174us/sample - loss: 2.3016 - accuracy: 0.1117 - val_loss: 2.3024 - val_accuracy: 0.1050\nEpoch 8/10\n54000/54000 [==============================] - 10s 180us/sample - loss: 2.3017 - accuracy: 0.1129 - val_loss: 2.3037 - val_accuracy: 0.1050\nEpoch 9/10\n54000/54000 [==============================] - 10s 189us/sample - loss: 2.3017 - accuracy: 0.1129 - val_loss: 2.3032 - val_accuracy: 0.1050\nEpoch 10/10\n54000/54000 [==============================] - 10s 182us/sample - loss: 2.3017 - accuracy: 0.1126 - val_loss: 2.3030 - val_accuracy: 0.1050\n"
],
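[
"# A quick side-by-side comparison, assuming the three History objects returned by fit() above.\n# Each history.history dict holds one value per epoch, so index -1 is the final epoch.\nfor name, h in [('model1', history1), ('model2', history2), ('model3', history3)]:\n    print(f\"{name} final val_accuracy: {h.history['val_accuracy'][-1]:.4f}\")",
"_____no_output_____"
],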
[
"loss, accuracy = model1.evaluate(test_data, test_labels, verbose=True)\n\nplt.plot(history1.history['accuracy'])\nplt.plot(history1.history['val_accuracy'])\nplt.title('model1 accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['training', 'validation'], loc='best')\n\nplt.show()\n\nplt.plot(history1.history['loss'])\nplt.plot(history1.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['training', 'validation'], loc='best')\n\nplt.show()\n\nprint(f'Test loss: {loss:.3}')\nprint(f'Test accuracy: {accuracy:.3}')\n\n#---------\n\nloss, accuracy = model2.evaluate(test_data, test_labels, verbose=True)\n\nplt.plot(history2.history['accuracy'])\nplt.plot(history2.history['val_accuracy'])\nplt.title('model2 accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['training', 'validation'], loc='best')\n\nplt.show()\n\nplt.plot(history2.history['loss'])\nplt.plot(history2.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['training', 'validation'], loc='best')\n\nplt.show()\n\nprint(f'Test loss: {loss:.3}')\nprint(f'Test accuracy: {accuracy:.3}')\n\n#---------\n\nloss, accuracy = model3.evaluate(test_data, test_labels, verbose=True)\n\nplt.plot(history3.history['accuracy'])\nplt.plot(history3.history['val_accuracy'])\nplt.title('model3 accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['training', 'validation'], loc='best')\n\nplt.show()\n\nplt.plot(history3.history['loss'])\nplt.plot(history3.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['training', 'validation'], loc='best')\n\nplt.show()\n\nprint(f'Test loss: {loss:.3}')\nprint(f'Test accuracy: {accuracy:.3}')\n\n#---------",
"10000/10000 [==============================] - 2s 164us/sample - loss: 2.3010 - accuracy: 0.1135\n"
]
],
[
[
"* Did any of these models achieve better than 20% accuracy on validation or test data?\n\nNo :(\n * State a hypothesis about why these networks performed the way they did.\n \n Since we are only keeping 10% of our guesses iteratively for each \n 10 - 50 - 100 or so guesses. It implies that the machine wasn't\n allowed enough guesses to correct itself.\n \n * *An answer to this question is given in a notebook from the next section [01-activations](/02-training-and-regularization-tactics/01-activations.ipynb)*\n* How many total trainable parameters do each of these models have?\n\nTrainable params: 8,840\nTrainable params: 60,160\nTrainable params: 160,310\n\n* Is there a clear correlation between number of trainable parameters and accuracy?\n\nNo, all of them seem to have 10% accuracy \n\n * Consider your results from part one in answering this question.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e734f9b7aee3c9c2f89c600e741774dcc5df83dc | 21,435 | ipynb | Jupyter Notebook | nvidia_keras_mnist.ipynb | haricash/cnn-resources | 45c133092c542437d6a4868cb5e0be7c8907d7cb | [
"MIT"
] | null | null | null | nvidia_keras_mnist.ipynb | haricash/cnn-resources | 45c133092c542437d6a4868cb5e0be7c8907d7cb | [
"MIT"
] | null | null | null | nvidia_keras_mnist.ipynb | haricash/cnn-resources | 45c133092c542437d6a4868cb5e0be7c8907d7cb | [
"MIT"
] | null | null | null | 46.80131 | 4,986 | 0.477303 | [
[
[
"<a href=\"https://colab.research.google.com/github/haricash/cnn-resources/blob/main/nvidia_keras_mnist.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.datasets import mnist",
"_____no_output_____"
],
[
"(x_train, y_train), (x_valid, y_valid) = mnist.load_data()",
"Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 0s 0us/step\n11501568/11490434 [==============================] - 0s 0us/step\n"
],
[
"x_train.shape",
"_____no_output_____"
],
[
"x_train[30000]",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\n",
"_____no_output_____"
],
[
"image = x_train[30000]\nplt.imshow(image, cmap='gray')",
"_____no_output_____"
],
[
"x_train = x_train.reshape(60000, 784)\nx_valid = x_valid.reshape(10000, 784)",
"_____no_output_____"
],
[
"# Normalizing the data. More on this later\nx_train = x_train/255\nx_valid = x_valid/255",
"_____no_output_____"
]
],
[
[
"### Categorically encoding values",
"_____no_output_____"
]
],
[
[
"import tensorflow.keras as keras\nnum_categories = 10",
"_____no_output_____"
],
[
"y_train = keras.utils.to_categorical(y_train, num_categories)\ny_valid = keras.utils.to_categorical(y_valid, num_categories)",
"_____no_output_____"
],
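[
"# A small sanity check on the encoding above: each label is now a length-10 one-hot vector\n# with a single 1 at the index of the digit it represents.\nprint(y_train[0])",
"_____no_output_____"
],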
[
"from tensorflow.keras.models import Sequential\n# This instantiates the model type\nmodel = Sequential()",
"_____no_output_____"
],
[
"# This adds layers to the model\nfrom tensorflow.keras.layers import Dense\nmodel.add(Dense(units=512, activation='relu', input_shape=(784,)))",
"_____no_output_____"
],
[
"model.add(Dense(units=512, activation='relu'))",
"_____no_output_____"
],
[
"model.add(Dense(units=10, activation='softmax'))",
"_____no_output_____"
],
[
"# Summarising the model\nmodel.summary()",
"Model: \"sequential\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n dense (Dense) (None, 512) 401920 \n \n dense_1 (Dense) (None, 512) 262656 \n \n dense_2 (Dense) (None, 10) 5130 \n \n=================================================================\nTotal params: 669,706\nTrainable params: 669,706\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"model.compile(loss='categorical_crossentropy', metrics=['accuracy'])",
"_____no_output_____"
],
[
"from sys import version\nhistory = model.fit(x_train, y_train,\n epochs = 5,\n verbose = 1,\n validation_data=(x_valid,y_valid)\n )",
"Epoch 1/5\n1875/1875 [==============================] - 24s 12ms/step - loss: 0.1894 - accuracy: 0.9442 - val_loss: 0.1023 - val_accuracy: 0.9726\nEpoch 2/5\n1875/1875 [==============================] - 24s 13ms/step - loss: 0.1013 - accuracy: 0.9742 - val_loss: 0.1184 - val_accuracy: 0.9729\nEpoch 3/5\n1875/1875 [==============================] - 24s 13ms/step - loss: 0.0850 - accuracy: 0.9800 - val_loss: 0.1502 - val_accuracy: 0.9717\nEpoch 4/5\n1875/1875 [==============================] - 23s 13ms/step - loss: 0.0751 - accuracy: 0.9836 - val_loss: 0.1342 - val_accuracy: 0.9767\nEpoch 5/5\n1875/1875 [==============================] - 23s 12ms/step - loss: 0.0645 - accuracy: 0.9861 - val_loss: 0.1217 - val_accuracy: 0.9790\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e735011ae6653e827c543856c818cb30184cfc7d | 77,982 | ipynb | Jupyter Notebook | introduction.ipynb | nattiya/Transformers | 5e1d9c096c8a5203254173ab1265ebda7ef18565 | [
"MIT"
] | null | null | null | introduction.ipynb | nattiya/Transformers | 5e1d9c096c8a5203254173ab1265ebda7ef18565 | [
"MIT"
] | null | null | null | introduction.ipynb | nattiya/Transformers | 5e1d9c096c8a5203254173ab1265ebda7ef18565 | [
"MIT"
] | null | null | null | 46.807923 | 296 | 0.415404 | [
[
[
"# HuggingFace Pipeline",
"_____no_output_____"
]
],
[
[
"from transformers import pipeline\nclassifier = pipeline('sentiment-analysis')",
"_____no_output_____"
],
[
"classifier('We are very happy to show you the 🤗 Transformers library.')",
"_____no_output_____"
],
[
"results = classifier([\"We are very happy to show you the 🤗 Transformers library.\",\"We hope you don't hate it.\"])\nfor result in results:\n print(f\"label: {result['label']}, with score: {round(result['score'], 4)}\")",
"label: POSITIVE, with score: 0.9998\nlabel: NEGATIVE, with score: 0.5309\n"
],
[
"classifier = pipeline('sentiment-analysis', model=\"nlptown/bert-base-multilingual-uncased-sentiment\")",
"_____no_output_____"
],
[
"classifier('We are very happy to show you the 🤗 Transformers library.')",
"_____no_output_____"
],
[
"results = classifier([\"We are very happy to show you the 🤗 Transformers library.\",\"We hope you don't hate it.\"])\nfor result in results:\n print(f\"label: {result['label']}, with score: {round(result['score'], 4)}\")",
"label: 5 stars, with score: 0.7725\nlabel: 5 stars, with score: 0.2365\n"
]
],
[
[
"# Pre-trained Model",
"_____no_output_____"
]
],
[
[
"from transformers import AutoTokenizer, AutoModelForSequenceClassification, AutoModel",
"_____no_output_____"
],
[
"from transformers import AutoTokenizer, AutoModelForSequenceClassification\nmodel_name = \"distilbert-base-uncased-finetuned-sst-2-english\"\npt_model = AutoModelForSequenceClassification.from_pretrained(model_name)\ntokenizer = AutoTokenizer.from_pretrained(model_name)",
"_____no_output_____"
],
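[
"# A minimal sketch showing one way to reuse the objects loaded above: pipeline() accepts\n# a model object and tokenizer directly (pt_classifier is just an illustrative name).\nfrom transformers import pipeline\npt_classifier = pipeline('sentiment-analysis', model=pt_model, tokenizer=tokenizer)\npt_classifier('We are very happy to show you the 🤗 Transformers library.')",
"_____no_output_____"
],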
[
"classifier('We are very happy to show you the 🤗 Transformers library.')",
"_____no_output_____"
],
[
"results = classifier([\"We are very happy to show you the 🤗 Transformers library.\",\"We hope you don't hate it.\"])\nfor result in results:\n print(f\"label: {result['label']}, with score: {round(result['score'], 4)}\")",
"label: 5 stars, with score: 0.7725\nlabel: 5 stars, with score: 0.2365\n"
]
],
[
[
"## Tokenizer",
"_____no_output_____"
]
],
[
[
"inputs = tokenizer(\"We are very happy to show you the 🤗 Transformers library.\")",
"_____no_output_____"
],
[
"print(inputs)",
"{'input_ids': [101, 2057, 2024, 2200, 3407, 2000, 2265, 2017, 1996, 100, 19081, 3075, 1012, 102], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\n"
],
[
"pt_batch = tokenizer(\n [\"We are very happy to show you the 🤗 Transformers library.\", \"We hope you don't hate it.\"],\n padding=True,\n truncation=True,\n return_tensors=\"pt\"\n)",
"Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.\n"
],
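[
"# A small illustration using pt_batch from the cell above: mapping the input ids back to\n# their WordPiece tokens makes the [CLS]/[SEP] markers and the [PAD] padding visible.\nfor ids in pt_batch['input_ids']:\n    print(tokenizer.convert_ids_to_tokens(ids.tolist()))",
"_____no_output_____"
],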
[
"for key, value in pt_batch.items():\n print(f\"{key}: {value.numpy().tolist()}\")",
"input_ids: [[101, 2057, 2024, 2200, 3407, 2000, 2265, 2017, 1996, 100, 19081, 3075, 1012, 102], [101, 2057, 3246, 2017, 2123, 1005, 1056, 5223, 2009, 1012, 102, 0, 0, 0]]\nattention_mask: [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]]\n"
]
],
[
[
"## Model Output",
"_____no_output_____"
]
],
[
[
"pt_outputs = pt_model(**pt_batch)",
"_____no_output_____"
],
[
"print(pt_outputs)",
"(tensor([[-4.0833, 4.3364],\n [ 0.0818, -0.0418]], grad_fn=<AddmmBackward>),)\n"
],
[
"import torch.nn.functional as F\npt_predictions = F.softmax(pt_outputs[0], dim=-1)",
"_____no_output_____"
],
[
"print(pt_predictions)",
"tensor([[2.2043e-04, 9.9978e-01],\n [5.3086e-01, 4.6914e-01]], grad_fn=<SoftmaxBackward>)\n"
],
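[
"# A short sketch using pt_predictions from above: the model config's id2label mapping\n# turns the argmax index of each row into a human-readable label.\nfor probs in pt_predictions:\n    idx = int(probs.argmax())\n    print(pt_model.config.id2label[idx], float(probs[idx]))",
"_____no_output_____"
],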
[
"import torch\npt_outputs = pt_model(**pt_batch, labels = torch.tensor([1, 0]))",
"_____no_output_____"
],
[
"print(pt_outputs)",
"(tensor(0.3167, grad_fn=<NllLossBackward>), tensor([[-4.0833, 4.3364],\n [ 0.0818, -0.0418]], grad_fn=<AddmmBackward>))\n"
],
[
"pt_outputs = pt_model(**pt_batch, output_hidden_states=True, output_attentions=True)\nall_hidden_states, all_attentions = pt_outputs[-2:]",
"_____no_output_____"
],
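[
"# A compact alternative to printing the full tensors below: the shapes alone show that there\n# is one hidden-state tensor per layer (plus the embeddings) and one attention tensor per layer.\nprint([tuple(h.shape) for h in all_hidden_states])\nprint([tuple(a.shape) for a in all_attentions])",
"_____no_output_____"
],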
[
"print(all_hidden_states)",
"(tensor([[[ 0.3549, -0.1386, -0.2253, ..., 0.1536, 0.0748, 0.1310],\n [-0.5773, 0.6791, -0.9738, ..., 0.8805, 1.1044, -0.7628],\n [-0.3451, -0.2094, 0.5709, ..., 0.3208, 0.0853, 0.4575],\n ...,\n [ 0.4431, 0.0931, -0.1034, ..., -0.7737, 0.0813, 0.0728],\n [-0.5605, 0.1081, 0.1229, ..., 0.4519, 0.2104, 0.2970],\n [-0.6116, 0.0156, -0.0555, ..., -0.1736, 0.1933, -0.0021]],\n\n [[ 0.3549, -0.1386, -0.2253, ..., 0.1536, 0.0748, 0.1310],\n [-0.5773, 0.6791, -0.9738, ..., 0.8805, 1.1044, -0.7628],\n [-0.7195, -0.0363, -0.6576, ..., 0.4434, 0.3358, -0.9249],\n ...,\n [ 0.0073, -0.5248, 0.0049, ..., 0.2801, -0.2253, 0.1293],\n [-0.0790, -0.5581, 0.2347, ..., 0.2370, -0.5104, 0.0770],\n [-0.0958, -0.5744, 0.2631, ..., 0.2453, -0.3293, 0.1269]]],\n grad_fn=<NativeLayerNormBackward>), tensor([[[ 5.0274e-02, 1.2093e-02, -1.1208e-01, ..., 6.2100e-02,\n 1.9892e-02, 3.6863e-02],\n [-4.1615e-01, 6.8497e-01, -2.2361e-01, ..., 9.1039e-01,\n 1.2211e+00, -8.4140e-01],\n [-2.9646e-01, -3.1139e-01, 8.0507e-01, ..., 1.9770e-02,\n -1.1648e-01, 2.0429e-01],\n ...,\n [ 1.1958e+00, 2.3648e-01, 2.1873e-01, ..., 3.1842e-01,\n -9.5855e-02, -4.3265e-01],\n [-2.3233e-01, 2.4509e-01, 4.6203e-01, ..., 1.8160e-01,\n 5.9420e-03, 9.2020e-02],\n [-2.4107e-01, -1.6251e-03, -2.8028e-02, ..., -3.5343e-04,\n -1.5141e-02, -5.3843e-02]],\n\n [[ 2.8430e-01, 4.6479e-02, -8.8556e-02, ..., 4.1611e-02,\n -7.4594e-02, -5.8437e-02],\n [ 2.0295e-01, 9.2777e-01, -4.6334e-01, ..., 8.2405e-01,\n 8.3170e-01, -4.7242e-01],\n [-3.5175e-01, 1.6169e-01, -4.7196e-01, ..., 3.3432e-01,\n 3.8847e-01, -1.1516e+00],\n ...,\n [ 6.2287e-04, 2.0926e-02, 4.4439e-01, ..., 9.9138e-02,\n -2.4379e-01, -1.3994e-01],\n [-7.2964e-02, -2.2786e-02, 4.2813e-01, ..., 1.1232e-01,\n -3.0760e-01, -1.2217e-01],\n [-9.5301e-02, -6.0372e-02, 4.4337e-01, ..., 9.9784e-02,\n -2.9221e-01, -1.3760e-01]]], grad_fn=<NativeLayerNormBackward>), tensor([[[ 1.0361e-01, -2.1219e-01, -2.4398e-04, ..., 1.2688e-01,\n 1.3114e-01, 2.7607e-01],\n [ 2.4999e-02, 1.5417e-01, -2.9168e-01, ..., 5.5499e-01,\n 8.8480e-01, -6.4057e-01],\n [-4.1609e-01, -4.2228e-01, 2.1983e-01, ..., 3.7131e-01,\n -3.1483e-01, 3.3536e-01],\n ...,\n [ 1.7722e+00, 3.6923e-01, 3.3390e-01, ..., 2.0879e-01,\n 1.4895e-01, -2.5811e-01],\n [-4.1081e-01, 8.0750e-02, 2.5709e-01, ..., 1.8100e-01,\n -9.1373e-02, -1.5869e-01],\n [-6.3747e-02, -5.2693e-02, 5.5198e-02, ..., -5.2895e-03,\n -1.4848e-03, 3.5660e-03]],\n\n [[ 1.9810e-01, -1.4408e-01, -4.0437e-02, ..., 4.4230e-02,\n -2.2497e-02, 6.2950e-02],\n [ 1.7996e-01, 5.2150e-01, -3.2541e-01, ..., 6.7319e-01,\n 4.6592e-01, -3.2569e-02],\n [ 1.1726e-01, 1.5158e-01, -1.0155e+00, ..., 6.9430e-01,\n -4.5949e-01, -4.9636e-01],\n ...,\n [-2.5773e-01, 5.4198e-01, 4.4803e-02, ..., 1.9991e-01,\n -6.8961e-04, -3.4591e-01],\n [-3.3679e-01, 1.0694e-01, 3.3969e-01, ..., 2.4351e-01,\n 6.3589e-02, -2.3889e-01],\n [ 1.6415e-02, 1.6959e-01, 2.6360e-01, ..., 1.9567e-01,\n 2.6422e-01, -3.0676e-01]]], grad_fn=<NativeLayerNormBackward>), tensor([[[ 0.0301, -0.0973, -0.0061, ..., -0.0848, 0.4859, 0.5157],\n [ 0.1215, 0.3077, -0.1605, ..., 0.8761, 1.0058, -0.8817],\n [-0.1056, -0.1885, 0.4484, ..., 0.1347, -0.1353, 0.1030],\n ...,\n [ 1.8863, 0.8067, 0.5119, ..., 0.0873, 0.5532, 0.0379],\n [-0.2200, 0.3982, 0.4209, ..., 0.9432, 0.0426, -0.0773],\n [-0.0255, -0.0368, 0.0297, ..., -0.0123, -0.0315, -0.0076]],\n\n [[ 0.4607, -0.2179, 0.2835, ..., 0.0227, 0.2635, 0.0273],\n [ 0.3328, -0.0494, -0.4968, ..., 0.9583, 0.4502, -0.2955],\n [ 0.5324, 0.3668, -1.0835, ..., -0.1238, 0.0216, -0.5270],\n ...,\n [ 
0.0422, 0.7928, 0.5240, ..., 0.0158, 0.2665, -0.2744],\n [-0.0562, 0.5600, 0.9003, ..., 0.0896, 0.1925, -0.3044],\n [ 0.3048, 0.6908, 0.8041, ..., 0.1470, 0.2505, -0.3329]]],\n grad_fn=<NativeLayerNormBackward>), tensor([[[-0.3409, -0.1061, -0.2683, ..., -0.1652, 0.5796, 0.5002],\n [ 0.4214, 0.2444, 0.2686, ..., 0.6099, 1.3426, -1.0144],\n [ 0.1006, -0.2627, 0.4556, ..., -0.0886, 0.4303, -0.3104],\n ...,\n [ 0.8253, 0.7173, 0.7712, ..., 0.1630, 0.4335, 0.4238],\n [-0.5086, -0.1269, -0.0976, ..., 0.5660, -0.2487, 0.0164],\n [-0.0297, -0.0152, 0.0352, ..., -0.0670, -0.0422, -0.0314]],\n\n [[-0.1309, -0.2074, -0.4694, ..., -0.1594, 0.2173, 0.3738],\n [ 0.5795, 0.5547, -0.8775, ..., 1.4283, 0.3127, 0.0952],\n [ 0.8291, 0.4305, -1.0844, ..., 0.3877, -0.0820, -1.0547],\n ...,\n [ 0.5483, 1.8457, 0.7810, ..., 0.1451, 0.1964, 0.3141],\n [ 0.1724, 0.7294, 1.0131, ..., 1.0574, 0.4171, 0.2443],\n [ 0.5484, 1.2144, 1.2670, ..., 0.7768, 0.3340, 0.3799]]],\n grad_fn=<NativeLayerNormBackward>), tensor([[[-0.1953, 0.0860, 0.9997, ..., 0.0795, 0.8022, 0.0268],\n [-0.1003, 0.0457, 0.7049, ..., 0.1907, 0.9301, -0.0848],\n [-0.2361, -0.1552, 0.9777, ..., -0.2275, 0.4273, 0.2595],\n ...,\n [ 0.2493, 0.5979, 1.4683, ..., 0.8304, -0.0699, 0.1693],\n [-0.0825, 0.0384, 0.1441, ..., 0.0879, 0.0228, -0.0886],\n [ 0.0572, 0.0750, -0.0138, ..., 0.0137, 0.0714, -0.0268]],\n\n [[ 0.6188, -0.2208, -0.2991, ..., -0.3821, 0.3352, 0.0707],\n [ 0.6980, 0.1420, -0.9464, ..., 0.6355, 0.6266, 0.3075],\n [ 1.2119, -0.5208, -1.0993, ..., -0.5306, 0.2016, -1.5490],\n ...,\n [ 1.3914, 1.3196, 0.1754, ..., 0.2006, 0.5786, 0.3490],\n [ 0.2596, 1.0677, 0.2662, ..., 1.0632, 0.7110, 0.0872],\n [ 0.8003, 1.2155, 0.2833, ..., 0.5789, 0.8176, 0.1046]]],\n grad_fn=<NativeLayerNormBackward>), tensor([[[ 0.5947, 0.4446, 0.2861, ..., 0.5963, 0.8363, -0.3938],\n [ 1.1414, 0.4595, 0.3559, ..., 0.3863, 1.2749, -0.3457],\n [ 0.9597, 0.5115, 0.3641, ..., 0.3357, 0.9268, -0.3391],\n ...,\n [ 0.7021, 0.4715, 0.8777, ..., 0.7810, 0.2934, 0.0371],\n [ 1.0302, 0.2514, 0.5606, ..., 0.7747, 0.5619, -0.7242],\n [ 1.1405, 0.3427, 0.5703, ..., 0.7425, 0.5046, -0.5744]],\n\n [[ 0.1268, -0.2154, -0.0986, ..., -0.3476, 0.4724, 0.1091],\n [ 0.8312, 0.3712, -0.4354, ..., 0.0517, 0.9167, 0.0283],\n [ 0.4559, -0.0993, -0.3187, ..., -0.2703, 0.1305, -0.4617],\n ...,\n [ 0.6091, 0.1803, 0.2014, ..., -0.1248, 0.2785, 0.1924],\n [ 0.1743, 0.0365, 0.2249, ..., 0.0013, 0.4132, 0.0224],\n [ 0.3894, 0.1205, 0.1902, ..., 0.0297, 0.4249, 0.0261]]],\n grad_fn=<NativeLayerNormBackward>))\n"
],
[
"print(all_attentions)",
"(tensor([[[[6.9022e-02, 3.8968e-02, 2.4874e-02, ..., 6.2119e-02,\n 9.8898e-02, 2.0635e-01],\n [6.6224e-02, 9.9285e-02, 1.4523e-02, ..., 1.1821e-01,\n 2.1042e-02, 2.1536e-02],\n [2.4178e-01, 1.3899e-01, 2.0652e-02, ..., 2.8374e-02,\n 6.0764e-02, 1.6392e-01],\n ...,\n [1.1183e-01, 8.3808e-02, 3.2881e-02, ..., 3.5935e-02,\n 3.5129e-02, 5.0480e-02],\n [2.1915e-01, 5.3615e-02, 3.4320e-02, ..., 3.0000e-02,\n 1.1168e-01, 9.4422e-02],\n [2.2360e-01, 3.2264e-02, 4.1001e-02, ..., 2.2156e-02,\n 6.7407e-02, 1.4797e-01]],\n\n [[9.8010e-01, 7.9143e-04, 8.0962e-04, ..., 5.0954e-04,\n 1.9540e-03, 2.4879e-03],\n [1.7871e-03, 1.1840e-02, 1.6939e-02, ..., 1.1627e-01,\n 6.6234e-03, 3.6021e-02],\n [4.7489e-03, 1.9808e-02, 9.9290e-03, ..., 2.6390e-01,\n 4.4807e-03, 3.0774e-02],\n ...,\n [1.7291e-02, 2.4254e-02, 2.5427e-02, ..., 1.6302e-01,\n 1.2134e-02, 1.4572e-01],\n [2.6022e-03, 1.1500e-01, 3.2086e-01, ..., 1.7249e-02,\n 2.4019e-01, 1.0155e-02],\n [1.7293e-02, 5.5264e-02, 5.8805e-02, ..., 9.6486e-03,\n 5.3791e-02, 3.4987e-02]],\n\n [[8.4111e-01, 6.9329e-03, 1.0020e-02, ..., 6.7478e-03,\n 1.1580e-02, 7.0684e-02],\n [9.9258e-01, 1.9440e-03, 5.2326e-04, ..., 7.4994e-04,\n 2.0331e-05, 2.4515e-03],\n [1.4311e-01, 8.3800e-01, 1.3362e-02, ..., 7.9580e-06,\n 1.7253e-03, 3.0105e-04],\n ...,\n [6.8667e-01, 1.7128e-04, 7.5005e-05, ..., 3.3088e-03,\n 1.3021e-03, 6.6601e-02],\n [1.8699e-01, 8.2563e-04, 3.2996e-04, ..., 7.1641e-01,\n 2.6155e-02, 2.0377e-02],\n [7.2809e-02, 4.3320e-04, 1.3453e-04, ..., 5.8065e-04,\n 8.6665e-01, 5.8196e-02]],\n\n ...,\n\n [[5.4650e-01, 7.9617e-03, 2.4563e-02, ..., 1.3572e-02,\n 3.2254e-02, 1.4302e-01],\n [7.8429e-01, 1.3282e-02, 1.0346e-03, ..., 3.5690e-03,\n 9.8088e-03, 5.4722e-02],\n [9.1804e-01, 1.2959e-03, 1.0142e-03, ..., 3.3527e-03,\n 7.1770e-03, 2.7526e-02],\n ...,\n [4.5079e-01, 5.4237e-03, 2.3266e-02, ..., 1.8060e-01,\n 9.1818e-03, 2.1320e-01],\n [8.8647e-01, 5.1577e-03, 2.1136e-02, ..., 7.2142e-04,\n 1.2202e-03, 1.8957e-02],\n [6.3332e-01, 1.4941e-02, 4.6328e-02, ..., 5.1669e-03,\n 1.0505e-02, 2.9852e-02]],\n\n [[6.1334e-01, 2.4815e-02, 3.5596e-02, ..., 2.4639e-02,\n 4.9098e-02, 7.9004e-02],\n [1.6322e-01, 7.8833e-03, 8.0923e-01, ..., 1.4238e-04,\n 8.9188e-04, 5.9475e-03],\n [2.2871e-01, 5.0659e-03, 2.4596e-02, ..., 4.9464e-05,\n 2.0959e-03, 3.8778e-03],\n ...,\n [4.5538e-01, 1.5110e-03, 1.0154e-04, ..., 5.0141e-03,\n 5.0382e-01, 2.9285e-02],\n [5.4977e-02, 2.5476e-05, 1.0212e-03, ..., 9.2205e-04,\n 1.2836e-02, 9.2059e-01],\n [6.8564e-01, 1.3085e-03, 5.6541e-04, ..., 1.1105e-02,\n 4.8599e-02, 2.2977e-01]],\n\n [[7.1133e-01, 1.3786e-02, 8.6101e-03, ..., 1.2470e-02,\n 3.1130e-03, 1.0588e-01],\n [4.7252e-01, 1.2537e-02, 1.3263e-01, ..., 1.4976e-02,\n 1.5928e-02, 6.8173e-02],\n [1.3076e-01, 6.3047e-01, 1.6989e-02, ..., 8.3122e-03,\n 6.0395e-03, 8.0623e-03],\n ...,\n [1.2446e-01, 9.2721e-04, 7.7632e-03, ..., 1.0815e-02,\n 4.3622e-02, 8.6710e-02],\n [4.7372e-01, 6.2379e-03, 4.9318e-03, ..., 1.7645e-02,\n 3.4072e-03, 2.8679e-01],\n [3.6101e-01, 5.6692e-02, 1.6049e-02, ..., 2.1485e-02,\n 1.0954e-01, 1.7683e-01]]],\n\n\n [[[9.0521e-02, 5.1106e-02, 7.9194e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [8.2611e-02, 1.2385e-01, 1.4949e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [2.6700e-01, 1.1252e-01, 5.5518e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [1.7026e-01, 6.9132e-02, 1.3190e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.7370e-01, 6.8946e-02, 1.2780e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.7932e-01, 7.3312e-02, 1.2261e-01, ..., 
0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n [[9.8603e-01, 7.9622e-04, 1.4406e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [3.5470e-03, 2.3499e-02, 2.7516e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.2620e-02, 3.8379e-02, 9.7579e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [9.3878e-01, 1.1547e-03, 7.8028e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [9.2196e-01, 1.3957e-03, 9.4457e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [9.4556e-01, 9.4616e-04, 7.3582e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n [[8.8127e-01, 7.2640e-03, 8.6560e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [9.8893e-01, 1.9368e-03, 1.1717e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [6.5964e-01, 3.3576e-01, 1.4000e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [1.8304e-01, 1.5062e-04, 4.2495e-05, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [8.6449e-01, 1.7834e-03, 3.9324e-04, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [9.4569e-01, 5.2922e-03, 7.6105e-04, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n ...,\n\n [[5.2649e-01, 7.6702e-03, 1.9355e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [8.7342e-01, 1.4792e-02, 1.1367e-04, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [7.8440e-01, 1.3008e-03, 1.0978e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [7.3536e-01, 1.0027e-02, 4.7585e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [7.0202e-01, 8.3565e-03, 5.7351e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [6.7368e-01, 1.0368e-02, 4.6698e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n [[6.9213e-01, 2.8002e-02, 2.9335e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.0823e-01, 5.2275e-03, 8.6614e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [5.6635e-01, 7.3119e-03, 1.5281e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [9.3118e-01, 6.3162e-03, 3.0322e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [9.3215e-01, 6.8583e-04, 9.7713e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [9.7081e-01, 1.6447e-03, 3.6291e-04, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n [[7.4131e-01, 1.4367e-02, 1.4636e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [5.4262e-01, 1.4397e-02, 1.3343e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [3.9909e-01, 2.3241e-01, 2.5502e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [9.3073e-01, 8.3436e-04, 9.7884e-04, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [9.5173e-01, 5.8081e-04, 1.0486e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [9.6286e-01, 1.1375e-03, 8.4350e-04, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]]]], grad_fn=<SoftmaxBackward>), tensor([[[[9.5606e-01, 3.6293e-03, 1.7433e-03, ..., 1.1607e-02,\n 1.2284e-03, 9.9458e-03],\n [1.2018e-06, 2.4235e-07, 1.0000e+00, ..., 4.2240e-09,\n 2.1559e-09, 4.1410e-07],\n [1.8957e-05, 1.6047e-06, 3.4839e-05, ..., 2.3854e-08,\n 9.4853e-09, 1.1110e-05],\n ...,\n [2.1049e-07, 1.8612e-08, 1.3921e-10, ..., 1.9203e-06,\n 9.9999e-01, 1.0590e-05],\n [4.5743e-03, 2.5547e-08, 2.5061e-06, ..., 1.2388e-06,\n 1.0501e-03, 9.9437e-01],\n [8.4877e-01, 3.8991e-04, 2.2583e-04, ..., 1.5583e-03,\n 2.6111e-04, 1.4745e-01]],\n\n [[4.3738e-01, 2.2968e-02, 3.5239e-02, ..., 2.8381e-02,\n 5.2887e-02, 2.5823e-01],\n [2.0680e-01, 2.1192e-02, 5.2440e-01, ..., 4.6536e-04,\n 1.4574e-02, 1.6364e-01],\n [1.7000e-01, 6.0306e-01, 2.2291e-02, ..., 1.9670e-03,\n 6.6951e-03, 1.4703e-01],\n ...,\n [1.4190e-01, 2.4327e-03, 8.1486e-03, ..., 5.4224e-02,\n 7.2854e-02, 7.2019e-02],\n [1.3107e-01, 9.2753e-04, 7.0766e-04, ..., 3.9012e-01,\n 9.0475e-02, 
5.8993e-02],\n [6.4129e-01, 1.4057e-02, 1.2497e-02, ..., 2.0728e-02,\n 3.3142e-02, 2.3974e-01]],\n\n [[4.0467e-01, 6.4344e-03, 2.6555e-02, ..., 2.3441e-02,\n 2.9309e-02, 4.2918e-01],\n [4.4158e-01, 2.7384e-02, 2.3917e-02, ..., 1.3966e-03,\n 1.4137e-01, 2.1875e-01],\n [7.3528e-01, 3.2434e-03, 3.3153e-03, ..., 1.0582e-03,\n 6.9486e-02, 1.6807e-01],\n ...,\n [4.6189e-01, 5.8694e-03, 1.9609e-02, ..., 1.7757e-02,\n 9.1902e-02, 1.6227e-01],\n [2.8592e-01, 1.1211e-02, 5.0240e-02, ..., 3.3352e-02,\n 2.4676e-01, 1.9982e-01],\n [2.9177e-01, 1.5161e-02, 1.8828e-02, ..., 3.0137e-02,\n 1.6847e-01, 1.6995e-01]],\n\n ...,\n\n [[8.6757e-01, 6.3053e-03, 4.5205e-04, ..., 6.3448e-04,\n 1.4572e-02, 1.0917e-01],\n [2.0629e-07, 5.7013e-07, 1.0000e+00, ..., 3.3773e-09,\n 6.2697e-10, 2.0871e-07],\n [5.3303e-07, 8.4283e-08, 2.3977e-06, ..., 5.0606e-10,\n 1.1202e-10, 1.4432e-06],\n ...,\n [2.3754e-07, 2.4257e-08, 1.5007e-12, ..., 9.3752e-07,\n 1.0000e+00, 3.4935e-07],\n [2.5083e-04, 2.5278e-09, 9.5867e-09, ..., 7.0545e-08,\n 3.8312e-03, 9.9592e-01],\n [9.4615e-01, 1.1114e-04, 1.2392e-04, ..., 3.5467e-03,\n 6.9502e-03, 8.3672e-03]],\n\n [[5.0470e-01, 1.2478e-02, 1.3731e-02, ..., 2.4750e-02,\n 2.7475e-02, 8.5711e-02],\n [2.9823e-01, 9.2116e-02, 1.1848e-01, ..., 6.9899e-03,\n 5.4516e-02, 1.5242e-01],\n [4.5507e-01, 1.6942e-01, 3.3257e-02, ..., 5.2343e-03,\n 2.5496e-02, 3.3975e-02],\n ...,\n [1.9705e-01, 1.3956e-02, 1.7219e-02, ..., 2.4103e-01,\n 1.2358e-01, 5.2834e-02],\n [7.7166e-01, 3.6012e-02, 1.0683e-02, ..., 1.2980e-02,\n 8.0841e-03, 1.0430e-02],\n [9.1948e-01, 1.5576e-03, 3.3274e-03, ..., 2.6635e-03,\n 5.2431e-03, 2.4835e-02]],\n\n [[6.2796e-01, 7.5049e-03, 1.4712e-02, ..., 1.2177e-02,\n 1.6359e-02, 2.4760e-01],\n [1.2754e-01, 5.7040e-02, 9.8637e-02, ..., 2.3181e-02,\n 8.5888e-03, 3.9208e-02],\n [4.6587e-02, 1.3984e-02, 3.2403e-03, ..., 1.6459e-02,\n 4.3397e-03, 2.4736e-02],\n ...,\n [3.1640e-01, 4.2746e-03, 7.5832e-03, ..., 9.8658e-02,\n 1.6145e-01, 2.2653e-01],\n [5.9305e-01, 1.0407e-02, 2.5618e-03, ..., 3.1456e-02,\n 7.5050e-02, 2.5394e-01],\n [8.2368e-01, 3.1619e-03, 4.2164e-03, ..., 4.0121e-03,\n 5.3022e-03, 1.2611e-01]]],\n\n\n [[[9.5692e-01, 9.6782e-03, 1.8001e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [2.0229e-06, 1.0116e-06, 9.9995e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [4.5921e-06, 1.8390e-06, 1.9643e-07, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [8.2782e-02, 5.1377e-01, 2.5450e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.6093e-01, 5.9390e-04, 7.4724e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.8456e-01, 5.1708e-03, 2.4784e-06, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n [[4.2020e-01, 3.6474e-02, 2.3147e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [2.5352e-01, 1.7960e-02, 3.6955e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [2.5767e-02, 8.2591e-01, 2.9699e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [1.7270e-01, 4.2658e-02, 5.0665e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [2.0691e-01, 6.2070e-02, 6.9648e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [2.6809e-01, 1.0139e-01, 1.4668e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n [[4.5709e-01, 3.9494e-03, 4.7427e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [5.3693e-01, 2.8901e-02, 1.1177e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [4.2488e-01, 1.0741e-02, 1.5068e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [1.9164e-01, 3.6866e-02, 4.5974e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [2.2291e-01, 3.4622e-02, 4.2413e-02, ..., 
0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [2.3546e-01, 3.3202e-02, 4.1569e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n ...,\n\n [[8.5548e-01, 7.7538e-03, 1.2091e-04, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.4892e-07, 7.5195e-07, 1.0000e+00, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [6.3377e-07, 6.6803e-08, 7.2147e-08, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [6.6586e-02, 8.4229e-01, 8.7696e-05, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [4.3371e-02, 3.4843e-04, 7.7679e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [2.1306e-01, 3.0115e-03, 8.3388e-08, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n [[7.3969e-01, 1.2061e-02, 3.6076e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [5.9083e-01, 5.3290e-02, 1.0115e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [7.5786e-01, 6.9584e-02, 2.8493e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [4.3184e-01, 5.3765e-02, 3.7226e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [4.2167e-01, 5.5544e-02, 3.7377e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [4.2836e-01, 5.2178e-02, 3.5120e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n [[5.6133e-01, 1.0241e-02, 1.1434e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.4732e-01, 2.2529e-02, 1.9383e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.9765e-01, 3.2198e-02, 2.1079e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [3.2798e-01, 3.8838e-02, 3.8900e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [4.3708e-01, 3.9836e-02, 2.8053e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [4.9466e-01, 4.1558e-02, 2.3599e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]]]], grad_fn=<SoftmaxBackward>), tensor([[[[1.8774e-02, 1.3658e-02, 4.4589e-03, ..., 4.7408e-02,\n 3.4281e-03, 2.1307e-01],\n [1.3979e-02, 1.6624e-02, 1.0727e-02, ..., 2.6024e-02,\n 5.1907e-03, 1.6746e-01],\n [7.9253e-03, 5.0510e-02, 1.0900e-02, ..., 5.0777e-03,\n 1.9617e-03, 4.2357e-01],\n ...,\n [1.1933e-02, 7.0673e-03, 1.0148e-03, ..., 6.9341e-02,\n 7.5146e-03, 7.9192e-01],\n [1.2158e-02, 2.2997e-03, 1.5408e-03, ..., 2.5088e-03,\n 8.7164e-02, 8.5186e-01],\n [5.5550e-04, 3.8230e-06, 1.2094e-06, ..., 9.4029e-06,\n 1.5624e-03, 9.9780e-01]],\n\n [[2.0924e-02, 6.3426e-02, 3.0697e-02, ..., 2.5318e-01,\n 1.7121e-02, 1.7151e-02],\n [6.1457e-02, 2.0158e-01, 7.6478e-02, ..., 7.6220e-02,\n 8.9636e-02, 7.9970e-02],\n [4.5226e-02, 1.2710e-01, 1.5494e-01, ..., 2.9151e-02,\n 1.5793e-01, 1.8327e-01],\n ...,\n [1.2586e-01, 7.7997e-02, 5.4318e-02, ..., 7.3701e-02,\n 4.0335e-02, 1.3609e-01],\n [2.6098e-02, 6.8403e-02, 5.2906e-02, ..., 1.4867e-01,\n 1.4376e-01, 2.0731e-02],\n [1.2587e-02, 3.7322e-03, 5.2556e-03, ..., 6.1966e-03,\n 1.8507e-03, 9.1321e-01]],\n\n [[4.4176e-01, 3.2116e-02, 1.8805e-02, ..., 3.3781e-03,\n 1.6044e-01, 2.3165e-01],\n [1.1931e-01, 4.7142e-02, 3.9974e-03, ..., 2.4970e-03,\n 5.2308e-02, 5.2513e-01],\n [8.3793e-02, 1.4207e-02, 1.2424e-02, ..., 1.9402e-03,\n 2.1592e-02, 7.9838e-01],\n ...,\n [1.2196e-01, 9.6517e-03, 2.5775e-03, ..., 6.7819e-03,\n 1.5565e-02, 6.9465e-01],\n [2.9178e-02, 5.4805e-02, 3.4844e-02, ..., 2.2419e-02,\n 4.4490e-02, 3.9521e-01],\n [1.4609e-03, 1.8987e-06, 1.1245e-06, ..., 2.0323e-06,\n 2.2787e-05, 9.9850e-01]],\n\n ...,\n\n [[2.5292e-02, 7.6891e-03, 5.3091e-03, ..., 5.4447e-02,\n 1.0398e-02, 7.2964e-01],\n [9.8911e-02, 1.5312e-02, 4.0163e-02, ..., 1.6942e-02,\n 2.3380e-02, 5.9453e-01],\n [1.1533e-01, 2.8589e-02, 4.5700e-03, ..., 7.3753e-03,\n 1.9932e-02, 5.7555e-01],\n ...,\n [1.0568e-01, 3.4987e-03, 2.9569e-03, ..., 1.5989e-03,\n 
6.7131e-02, 4.5630e-01],\n [1.0275e-01, 8.9534e-03, 5.3477e-03, ..., 2.8758e-02,\n 8.8714e-02, 6.3174e-01],\n [1.3518e-02, 3.7594e-04, 3.8787e-04, ..., 2.6383e-03,\n 5.2072e-03, 9.5517e-01]],\n\n [[1.7634e-02, 5.6051e-03, 2.6115e-03, ..., 1.6528e-03,\n 8.4707e-03, 9.5062e-01],\n [9.2231e-02, 6.0107e-02, 1.0075e-02, ..., 3.1116e-03,\n 1.1069e-02, 7.9882e-01],\n [9.1018e-02, 6.0113e-01, 9.7269e-03, ..., 2.9583e-03,\n 5.2205e-03, 2.5881e-01],\n ...,\n [1.6761e-02, 1.2291e-03, 1.6022e-03, ..., 7.3467e-02,\n 4.8648e-03, 7.5557e-02],\n [3.1456e-02, 3.1384e-03, 6.0236e-03, ..., 2.9973e-01,\n 5.4283e-02, 1.3052e-01],\n [1.9947e-03, 1.3974e-04, 2.5675e-04, ..., 7.1265e-05,\n 3.9493e-04, 9.9642e-01]],\n\n [[1.6746e-02, 7.0054e-03, 4.8134e-04, ..., 1.5102e-03,\n 1.6577e-03, 9.7074e-01],\n [2.7134e-02, 3.1174e-02, 2.4460e-01, ..., 5.7122e-04,\n 2.8561e-03, 6.4594e-01],\n [1.0475e-02, 9.5354e-03, 3.2529e-03, ..., 5.1865e-04,\n 8.4748e-04, 1.4915e-01],\n ...,\n [3.3044e-02, 1.9563e-05, 5.0333e-06, ..., 8.6250e-03,\n 1.4739e-01, 8.0970e-01],\n [9.9560e-03, 7.5783e-03, 3.9445e-04, ..., 3.6876e-03,\n 8.0958e-03, 9.6901e-01],\n [3.3147e-03, 1.0860e-03, 7.4764e-04, ..., 3.2183e-04,\n 8.0692e-04, 9.9206e-01]]],\n\n\n [[[2.8339e-02, 4.5804e-02, 3.4485e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [3.2621e-02, 1.1328e-01, 4.5571e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.2333e-02, 2.1534e-02, 1.3704e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [1.2948e-02, 3.2171e-02, 1.4117e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.2411e-02, 1.0200e-02, 5.4287e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.5604e-02, 1.4040e-02, 6.6132e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n [[3.8977e-02, 2.1071e-01, 2.3637e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [8.1499e-02, 2.1716e-01, 2.4174e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [5.9569e-02, 7.3268e-02, 1.0682e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [1.0788e-01, 2.2282e-01, 1.1385e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.1243e-01, 1.6227e-01, 9.6793e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [9.2632e-02, 2.2113e-01, 1.1781e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n [[3.9954e-01, 1.3147e-02, 2.0863e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.1236e-01, 5.3990e-02, 2.0724e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.1560e-01, 3.5603e-02, 5.8809e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [1.1384e-01, 3.3079e-02, 1.0442e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.4992e-01, 1.7758e-02, 3.2303e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.7358e-01, 1.3797e-02, 3.4122e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n ...,\n\n [[4.2854e-02, 7.1209e-03, 1.0927e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [7.5930e-02, 1.3195e-03, 5.1272e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [6.6991e-02, 1.0044e-03, 9.8907e-04, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [1.0328e-01, 1.3188e-02, 2.7151e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [7.7103e-02, 1.0881e-02, 1.4488e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [7.1949e-02, 8.2928e-03, 1.4647e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n [[3.1679e-02, 6.2038e-03, 6.0154e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [8.3432e-02, 3.8892e-02, 1.1777e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.3730e-01, 2.2077e-01, 5.8259e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [5.7882e-02, 7.2919e-03, 
8.9824e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [6.7081e-02, 4.9508e-03, 3.4313e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [9.5307e-02, 1.0937e-02, 1.4491e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]],\n\n [[3.4224e-02, 1.1133e-02, 1.2735e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.3823e-02, 5.0204e-03, 7.4440e-01, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.3611e-02, 1.1960e-02, 3.3760e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n ...,\n [3.4052e-02, 4.1232e-01, 1.8248e-02, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [1.8495e-02, 1.8540e-02, 2.3072e-03, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00],\n [4.4386e-02, 1.3074e-02, 4.1019e-04, ..., 0.0000e+00,\n 0.0000e+00, 0.0000e+00]]]], grad_fn=<SoftmaxBackward>), tensor([[[[6.1406e-04, 1.2693e-03, 4.7267e-03, ..., 4.5354e-03,\n 2.2960e-02, 9.3051e-01],\n [1.9018e-02, 1.8609e-02, 1.3526e-02, ..., 2.9433e-03,\n 2.5381e-02, 8.7604e-01],\n [3.7469e-02, 9.5124e-02, 2.6027e-02, ..., 9.2239e-03,\n 4.0330e-02, 6.5167e-01],\n ...,\n [2.4672e-02, 5.8908e-03, 1.0019e-02, ..., 4.2944e-02,\n 9.8467e-03, 1.3271e-01],\n [2.7524e-03, 2.4642e-03, 1.5068e-02, ..., 4.1603e-02,\n 2.5831e-02, 3.6032e-01],\n [5.6754e-04, 7.4545e-04, 6.8452e-04, ..., 1.0787e-03,\n 1.3008e-02, 9.7781e-01]],\n\n [[1.0785e-02, 3.6259e-03, 2.3583e-03, ..., 5.0128e-02,\n 6.6608e-03, 8.2882e-01],\n [3.5291e-03, 5.1916e-03, 4.7315e-02, ..., 5.2014e-03,\n 5.6806e-04, 1.1079e-01],\n [2.7011e-03, 3.8884e-03, 9.4391e-03, ..., 1.0155e-02,\n 1.2599e-03, 4.8461e-02],\n ...,\n [1.3544e-02, 1.0657e-03, 1.3382e-03, ..., 3.6113e-02,\n 3.5145e-02, 8.6277e-01],\n [1.6438e-02, 4.2867e-02, 3.4942e-02, ..., 3.9013e-02,\n 1.5420e-02, 7.9886e-01],\n [3.8943e-03, 3.4099e-04, 1.6483e-04, ..., 2.7692e-03,\n 2.3678e-03, 9.8306e-01]],\n\n [[2.6466e-04, 1.0954e-02, 6.1866e-04, ..., 1.5009e-03,\n 1.6518e-03, 9.8174e-01],\n [3.6937e-03, 6.0723e-02, 4.5184e-01, ..., 4.5139e-04,\n 6.1796e-04, 1.9624e-01],\n [1.7313e-02, 3.1698e-02, 5.1115e-02, ..., 4.8720e-04,\n 1.0522e-03, 2.5367e-01],\n ...,\n [7.3922e-03, 2.4625e-04, 1.7911e-04, ..., 1.0827e-02,\n 4.2466e-01, 5.4948e-01],\n [5.1352e-04, 4.4458e-02, 1.1848e-03, ..., 4.3169e-03,\n 2.4434e-02, 9.2097e-01],\n [6.2359e-03, 4.4333e-04, 3.2996e-04, ..., 1.2100e-03,\n 3.4993e-03, 9.8261e-01]],\n\n ...,\n\n [[3.0173e-03, 1.9708e-03, 6.1727e-04, ..., 2.3716e-02,\n 2.7339e-02, 9.1449e-01],\n [7.2371e-03, 1.4321e-02, 1.3109e-03, ..., 5.0760e-04,\n 1.3910e-02, 9.4909e-01],\n [2.1733e-02, 8.9030e-02, 3.7737e-03, ..., 4.0170e-04,\n 6.0680e-02, 7.9754e-01],\n ...,\n [4.9625e-03, 1.4187e-03, 2.2352e-03, ..., 2.2446e-02,\n 1.1592e-02, 1.6703e-01],\n [1.1550e-02, 1.3063e-04, 3.2548e-04, ..., 4.0857e-02,\n 7.1629e-02, 5.4848e-01],\n [3.5959e-03, 9.3966e-04, 3.4053e-04, ..., 1.6904e-03,\n 1.0168e-02, 9.7250e-01]],\n\n [[1.6893e-03, 9.0600e-05, 3.8076e-05, ..., 1.5110e-02,\n 3.5623e-04, 9.5605e-01],\n [2.5061e-03, 2.1997e-03, 1.1519e-03, ..., 3.1057e-03,\n 6.0919e-05, 6.8009e-02],\n [1.8108e-03, 7.8007e-03, 1.7756e-03, ..., 2.8890e-02,\n 1.3290e-04, 8.6258e-02],\n ...,\n [7.1267e-03, 3.1406e-04, 9.3279e-05, ..., 1.0863e-01,\n 1.9058e-02, 7.1311e-01],\n [7.3368e-03, 9.4325e-03, 3.6600e-03, ..., 1.1883e-01,\n 1.6861e-02, 7.4417e-01],\n [2.3276e-03, 7.8446e-05, 2.7403e-05, ..., 1.6291e-03,\n 4.4539e-04, 9.9203e-01]],\n\n [[1.6472e-02, 1.9126e-03, 1.9055e-04, ..., 1.0718e-03,\n 3.3663e-01, 6.3805e-01],\n [1.6991e-02, 2.3537e-02, 1.6899e-03, ..., 4.8761e-04,\n 2.3290e-03, 9.4795e-01],\n [8.7763e-03, 7.4311e-01, 2.1626e-02, ..., 4.8692e-04,\n 
...(remaining printed output omitted: a long tuple of per-layer attention-weight tensors, each row a softmax distribution over tokens, ending with grad_fn=<SoftmaxBackward>)\n"
]
],
[
[
"## Save/load Model",
"_____no_output_____"
]
],
[
[
"tokenizer.save_pretrained(\".\")\npt_model.save_pretrained(\".\")",
"_____no_output_____"
],
[
"tokenizer = AutoTokenizer.from_pretrained(\".\")\npt_model = AutoModel.from_pretrained(\".\")",
"_____no_output_____"
]
],
[
[
"## Customizing Model",
"_____no_output_____"
],
[
"To change the hidden size, we can't use a pretrained model anymore and have to train from scratch by instantiating the model from a custom configuration.",
"_____no_output_____"
]
],
[
[
"from transformers import DistilBertConfig, DistilBertTokenizer, DistilBertForSequenceClassification\nconfig = DistilBertConfig(n_heads=8, dim=512, hidden_dim=4*512)\ntokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')\nmodel = DistilBertForSequenceClassification(config)",
"_____no_output_____"
]
],
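[
[
"As a quick optional sanity check (this cell is not part of the original notebook), we can confirm that the custom hidden size and number of attention heads actually made it into the freshly initialized model, and see how large the scratch model is. `model` and `config` refer to the objects created in the cell above.",
"_____no_output_____"
]
],
[
[
"# Optional sanity check: these values should match the custom config (512 and 8),\n# and the parameter count reflects the randomly initialized, untrained model.\nprint(model.config.dim, model.config.n_heads)\nprint(sum(p.numel() for p in model.parameters()))",
"_____no_output_____"
]
],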
[
[
"To change only the head of the model (for instance, the number of labels), we can still use a pretrained model for the body. For instance, let's define a classifier for 10 different labels using a pretrained body.",
"_____no_output_____"
]
],
[
[
"from transformers import DistilBertConfig, DistilBertTokenizer, DistilBertForSequenceClassification\nmodel_name = \"distilbert-base-uncased\"\nmodel = DistilBertForSequenceClassification.from_pretrained(model_name, num_labels=10)\ntokenizer = DistilBertTokenizer.from_pretrained(model_name)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7350232c2df268fd1bfdb9716970a9a43a84d7c | 10,004 | ipynb | Jupyter Notebook | 2019/09-Redes_Neurais_e_Aprendizado_Profundo/SentimentAnalysisRNN.ipynb | InsightLab/imersao-ciencia-de-dados-2019 | 80f0937144c6cb9507d6182285d5a3d17a4bc14c | [
"MIT"
] | 32 | 2019-07-09T21:43:15.000Z | 2022-03-30T20:38:43.000Z | 2019/09-Redes_Neurais_e_Aprendizado_Profundo/SentimentAnalysisRNN.ipynb | InsightLab/imersao-ciencia-de-dados-2019 | 80f0937144c6cb9507d6182285d5a3d17a4bc14c | [
"MIT"
] | null | null | null | 2019/09-Redes_Neurais_e_Aprendizado_Profundo/SentimentAnalysisRNN.ipynb | InsightLab/imersao-ciencia-de-dados-2019 | 80f0937144c6cb9507d6182285d5a3d17a4bc14c | [
"MIT"
] | 14 | 2019-07-15T17:15:26.000Z | 2022-03-30T01:50:44.000Z | 19.770751 | 101 | 0.529188 | [
[
[
"## Classificação de Revisões do IMDb com Keras",
"_____no_output_____"
]
],
[
[
"from keras.datasets import imdb\nfrom keras import preprocessing\nimport numpy as np\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"### Leitura dos dados",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('movie_data.csv.gz', encoding='utf-8')",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"samples = df[\"review\"].values",
"_____no_output_____"
],
[
"dimensionality = 1000 #dimensão do vetor quer vai representar a palavra",
"_____no_output_____"
]
],
[
[
"### Constrói o índice de palavras",
"_____no_output_____"
]
],
[
[
"from keras.preprocessing.text import Tokenizer",
"_____no_output_____"
],
[
"tokenizer = Tokenizer(num_words=1000) \ntokenizer.fit_on_texts(samples) #constroi o índice de palavras",
"_____no_output_____"
],
[
"word_index = tokenizer.word_index\nprint('Foram encontrados %s tokens.' % len(word_index))",
"_____no_output_____"
]
],
[
[
"### Transforma strings em lista de índices inteiros",
"_____no_output_____"
]
],
[
[
"sequences = tokenizer.texts_to_sequences(samples) #transforma o texto em sequencias de índices ",
"_____no_output_____"
],
[
"sequences[0][:10] #os 10 primeiros índices da frase 0",
"_____no_output_____"
]
],
[
[
"### Pre-processa sequencias para padronizar o tamanho",
"_____no_output_____"
]
],
[
[
"maxlen = 200\nsequences_padding = preprocessing.sequence.pad_sequences(sequences, maxlen=maxlen)",
"_____no_output_____"
],
[
"len(sequences[10])",
"_____no_output_____"
],
[
"len(sequences_padding[10])",
"_____no_output_____"
]
],
[
[
"### Usando a camada Embedding e classificando os dados do IMDB",
"_____no_output_____"
],
[
"### SimpleRNN",
"_____no_output_____"
],
[
"#### Construindo o modelo",
"_____no_output_____"
]
],
[
[
"from keras.models import Sequential\nfrom keras.layers import SimpleRNN, Embedding, Dense, Input",
"_____no_output_____"
],
[
"original_dim = 10000 #numero de palavra para considerar como feature\nnew_dim = 32",
"_____no_output_____"
],
[
"model = Sequential()\nmodel.add(Embedding(input_dim=dimensionality,input_length=maxlen,output_dim=new_dim))\nmodel.add(SimpleRNN(new_dim, input_shape=(new_dim)))\nmodel.add(Dense(1,activation='sigmoid'))",
"_____no_output_____"
],
[
"model.summary()",
"_____no_output_____"
]
],
[
[
"#### Compilando o modelo",
"_____no_output_____"
]
],
[
[
"model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])",
"_____no_output_____"
]
],
[
[
"#### Dividindo os dados em treino e teste",
"_____no_output_____"
]
],
[
[
"import random ",
"_____no_output_____"
],
[
"size = len(sequences_padding)\nindices = np.arange(size)\nrandom.shuffle(indices)",
"_____no_output_____"
],
[
"indices",
"_____no_output_____"
],
[
"x = sequences_padding[indices]\ny = df.sentiment.values[indices]",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"y",
"_____no_output_____"
],
[
"treino = 0.8\n\nx_treino = x[:int(treino*size),:]\ny_treino = y[:int(treino*size)]\nx_teste = x[int(treino*size):]\ny_teste = y[int(treino*size):]",
"_____no_output_____"
],
[
"y_teste.shape",
"_____no_output_____"
]
],
[
[
"#### Treinando o modelo",
"_____no_output_____"
]
],
[
[
"history = model.fit(x_treino, y_treino, epochs=10, batch_size=256, validation_split=0.2)",
"_____no_output_____"
]
],
[
[
"#### Avaliando o modelo",
"_____no_output_____"
]
],
[
[
"evaluation = model.evaluate(x_teste,y_teste)",
"_____no_output_____"
]
],
[
[
"#### Visualizando resultados",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# summarize history for accuracy\nplt.plot(history.history['acc'])\nplt.plot(history.history['val_acc'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['treino', 'validação'], loc='upper left')\nplt.show()",
"_____no_output_____"
],
[
"evaluation",
"_____no_output_____"
]
],
[
[
"### Modelo LSTM",
"_____no_output_____"
]
],
[
[
"from keras.layers import LSTM, Dense, Masking, Embedding\n\nmodel = Sequential()\n\n# Embedding layer\nmodel.add(Embedding(input_dim=dimensionality,input_length=maxlen,output_dim=new_dim))\n\n# Recurrent layer\nmodel.add(LSTM(new_dim, return_sequences=False, dropout=0.1, recurrent_dropout=0.1))\n\n# Fully connected layer\nmodel.add(Dense(new_dim, activation='relu')) \n\n\n# Output layer\nmodel.add(Dense(1, activation='sigmoid'))\n",
"_____no_output_____"
],
[
"model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])",
"_____no_output_____"
],
[
"history = model.fit(x_treino, y_treino, epochs=10, batch_size=256, validation_split=0.2)",
"_____no_output_____"
],
[
"evaluation = model.evaluate(x_teste,y_teste)",
"_____no_output_____"
],
[
"# summarize history for accuracy\nplt.plot(history.history['acc'])\nplt.plot(history.history['val_acc'])\nplt.title('model accuracy')\n\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['treino', 'validação'], loc='upper left')\nplt.show()",
"_____no_output_____"
],
[
"evaluation",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7350a5f084d5782417db2d09815d55dc199fcd8 | 133,363 | ipynb | Jupyter Notebook | Oanda v20 REST-oandapyV20/03.00 Account Information.ipynb | anthonyng2/FX-Trading-with-Python-and-Oanda | a898ae443c942e64a08e1d79d5972ca0d22fd166 | [
"MIT"
] | 60 | 2017-02-27T16:07:35.000Z | 2021-09-19T14:12:35.000Z | Oanda v20 REST-oandapyV20/03.00 Account Information.ipynb | TianyiDataAnalyst/FX-Trading-with-Python-and-Oanda | a898ae443c942e64a08e1d79d5972ca0d22fd166 | [
"MIT"
] | null | null | null | Oanda v20 REST-oandapyV20/03.00 Account Information.ipynb | TianyiDataAnalyst/FX-Trading-with-Python-and-Oanda | a898ae443c942e64a08e1d79d5972ca0d22fd166 | [
"MIT"
] | 55 | 2017-04-05T19:39:15.000Z | 2022-03-28T05:36:35.000Z | 40.388552 | 3,810 | 0.391068 | [
[
[
"<!--NAVIGATION-->\n< [Rates Information](02.00 Rates Information.ipynb) | [Contents](Index.ipynb) | [Order Management](04.00 Order Management.ipynb) >",
"_____no_output_____"
],
[
"# Account Information",
"_____no_output_____"
],
[
"[OANDA REST-V20 API Wrapper Doc on Account](http://oanda-api-v20.readthedocs.io/en/latest/endpoints/accounts.html)\n\n[OANDA API Getting Started](http://developer.oanda.com/rest-live-v20/introduction/)\n\n[OANDA API Account](http://developer.oanda.com/rest-live-v20/account-ep/)",
"_____no_output_____"
],
[
"## Account Details",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport oandapyV20\nimport oandapyV20.endpoints.accounts as accounts\nimport configparser",
"_____no_output_____"
],
[
"config = configparser.ConfigParser()\nconfig.read('../config/config_v20.ini')\naccountID = config['oanda']['account_id']\naccess_token = config['oanda']['api_key']",
"_____no_output_____"
],
[
"client = oandapyV20.API(access_token=access_token)\nr = accounts.AccountDetails(accountID)",
"_____no_output_____"
],
[
"client.request(r)",
"_____no_output_____"
],
[
"print(r.response)",
"{'lastTransactionID': '57', 'account': {'openTradeCount': 3, 'createdTime': '2017-01-20T14:23:22.308266448Z', 'currency': 'SGD', 'openPositionCount': 2, 'hedgingEnabled': False, 'marginCloseoutNAV': '100001.9288', 'marginAvailable': '99993.0859', 'marginRate': '0.02', 'marginCallMarginUsed': '8.5639', 'positionValue': '320.6001', 'marginCallPercent': '0.00009', 'pendingOrderCount': 3, 'balance': '100000.3026', 'orders': [{'instrument': 'EUR_USD', 'triggerCondition': 'TRIGGER_DEFAULT', 'id': '9', 'createTime': '2017-01-20T15:44:35.046525739Z', 'positionFill': 'POSITION_DEFAULT', 'units': '-100', 'type': 'LIMIT', 'stopLossOnFill': {'price': '1.22000', 'timeInForce': 'GTC'}, 'partialFill': 'DEFAULT_FILL', 'state': 'PENDING', 'price': '1.20000', 'timeInForce': 'GTC'}, {'instrument': 'EUR_USD', 'triggerCondition': 'TRIGGER_DEFAULT', 'id': '13', 'createTime': '2017-01-20T15:47:33.998386716Z', 'replacesOrderID': '11', 'positionFill': 'POSITION_DEFAULT', 'units': '-500000', 'type': 'LIMIT', 'partialFill': 'DEFAULT_FILL', 'state': 'PENDING', 'price': '1.25000', 'timeInForce': 'GTC'}, {'instrument': 'EUR_USD', 'triggerCondition': 'TRIGGER_DEFAULT', 'id': '17', 'createTime': '2017-01-20T15:47:51.120880289Z', 'replacesOrderID': '15', 'positionFill': 'POSITION_DEFAULT', 'units': '-500000', 'type': 'LIMIT', 'partialFill': 'DEFAULT_FILL', 'state': 'PENDING', 'price': '1.25000', 'timeInForce': 'GTC'}], 'positions': [{'instrument': 'EUR_USD', 'resettablePL': '-0.0086', 'short': {'unrealizedPL': '0.0000', 'resettablePL': '0.0000', 'units': '0', 'pl': '0.0000'}, 'pl': '-0.0086', 'unrealizedPL': '0.0000', 'long': {'unrealizedPL': '0.0000', 'resettablePL': '-0.0086', 'units': '0', 'pl': '-0.0086'}}, {'instrument': 'GBP_USD', 'resettablePL': '0.2866', 'short': {'unrealizedPL': '0.0000', 'resettablePL': '0.0000', 'units': '0', 'pl': '0.0000'}, 'pl': '0.2866', 'unrealizedPL': '0.0000', 'long': {'unrealizedPL': '0.0000', 'resettablePL': '0.2866', 'units': '0', 'pl': '0.2866'}}, {'instrument': 'AUD_USD', 'resettablePL': '0.0000', 'short': {'unrealizedPL': '0.0000', 'resettablePL': '0.0000', 'units': '0', 'pl': '0.0000'}, 'pl': '0.0000', 'unrealizedPL': '-0.1146', 'long': {'resettablePL': '0.0000', 'pl': '0.0000', 'unrealizedPL': '-0.1146', 'tradeIDs': ['31', '33'], 'units': '200', 'averagePrice': '0.75481'}}, {'instrument': 'NZD_USD', 'resettablePL': '0.0000', 'short': {'unrealizedPL': '0.0000', 'resettablePL': '0.0000', 'units': '0', 'pl': '0.0000'}, 'pl': '0.0000', 'unrealizedPL': '1.4736', 'long': {'resettablePL': '0.0000', 'pl': '0.0000', 'unrealizedPL': '1.4736', 'tradeIDs': ['35'], 'units': '100', 'averagePrice': '0.71532'}}], 'NAV': '100001.6616', 'withdrawalLimit': '99993.0859', 'id': '101-003-5120068-001', 'marginUsed': '8.5757', 'trades': [{'instrument': 'AUD_USD', 'realizedPL': '0.0000', 'id': '31', 'state': 'OPEN', 'initialUnits': '100', 'price': '0.75489', 'unrealizedPL': '-0.0688', 'openTime': '2017-01-20T15:58:23.903964257Z', 'financing': '0.0078', 'currentUnits': '100'}, {'instrument': 'AUD_USD', 'realizedPL': '0.0000', 'id': '33', 'state': 'OPEN', 'initialUnits': '100', 'price': '0.75473', 'unrealizedPL': '-0.0458', 'openTime': '2017-01-20T15:58:58.618457963Z', 'financing': '0.0078', 'currentUnits': '100'}, {'instrument': 'NZD_USD', 'realizedPL': '0.0000', 'id': '35', 'state': 'OPEN', 'initialUnits': '100', 'price': '0.71532', 'unrealizedPL': '1.4736', 'openTime': '2017-01-20T15:59:08.362429413Z', 'financing': '0.0090', 'currentUnits': '100'}], 'pl': '0.2780', 'unrealizedPL': '1.3590', 
'marginCloseoutUnrealizedPL': '1.6262', 'createdByUserID': 5120068, 'resettablePL': '0.2780', 'alias': 'Primary', 'lastTransactionID': '57', 'marginCloseoutPercent': '0.00004', 'marginCloseoutMarginUsed': '8.5639', 'marginCloseoutPositionValue': '320.1353'}}\n"
],
[
"pd.Series(r.response['account'])",
"_____no_output_____"
]
],
[
[
"## Account List",
"_____no_output_____"
]
],
[
[
"r = accounts.AccountList()",
"_____no_output_____"
],
[
"client.request(r)",
"_____no_output_____"
],
[
"print(r.response)",
"{'accounts': [{'tags': [], 'id': '101-003-5120068-001'}]}\n"
]
],
[
[
"## Account Summary",
"_____no_output_____"
]
],
[
[
"r = accounts.AccountSummary(accountID)",
"_____no_output_____"
],
[
"client.request(r)",
"_____no_output_____"
],
[
"print(r.response)",
"{'lastTransactionID': '57', 'account': {'openTradeCount': 3, 'createdTime': '2017-01-20T14:23:22.308266448Z', 'currency': 'SGD', 'openPositionCount': 2, 'hedgingEnabled': False, 'marginCloseoutNAV': '100001.9288', 'marginAvailable': '99993.0859', 'marginRate': '0.02', 'marginCallMarginUsed': '8.5639', 'positionValue': '320.6001', 'marginCallPercent': '0.00009', 'pendingOrderCount': 3, 'balance': '100000.3026', 'NAV': '100001.6616', 'withdrawalLimit': '99993.0859', 'id': '101-003-5120068-001', 'marginUsed': '8.5757', 'pl': '0.2780', 'unrealizedPL': '1.3590', 'marginCloseoutUnrealizedPL': '1.6262', 'createdByUserID': 5120068, 'resettablePL': '0.2780', 'alias': 'Primary', 'lastTransactionID': '57', 'marginCloseoutPercent': '0.00004', 'marginCloseoutMarginUsed': '8.5639', 'marginCloseoutPositionValue': '320.1353'}}\n"
],
[
"pd.Series(r.response['account'])",
"_____no_output_____"
]
],
[
[
"## Account Instruments",
"_____no_output_____"
]
],
[
[
"r = accounts.AccountInstruments(accountID=accountID, params = \"EUR_USD\")",
"_____no_output_____"
],
[
"client.request(r)",
"_____no_output_____"
],
[
"pd.DataFrame(r.response['instruments'])",
"_____no_output_____"
]
],
[
[
"<!--NAVIGATION-->\n< [Rates Information](02.00 Rates Information.ipynb) | [Contents](Index.ipynb) | [Order Management](04.00 Order Management.ipynb) >",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7350f110ebba2e44a03c35c3e33b2cb0dcd7abf | 33,357 | ipynb | Jupyter Notebook | exercise-pipelines.ipynb | mdhasan8/Machine-Learning-in-Python | d66607d3003e8279e35cf176851f506aa833a9fe | [
"MIT"
] | null | null | null | exercise-pipelines.ipynb | mdhasan8/Machine-Learning-in-Python | d66607d3003e8279e35cf176851f506aa833a9fe | [
"MIT"
] | null | null | null | exercise-pipelines.ipynb | mdhasan8/Machine-Learning-in-Python | d66607d3003e8279e35cf176851f506aa833a9fe | [
"MIT"
] | null | null | null | 32.864039 | 456 | 0.50724 | [
[
[
"**This notebook is an exercise in the [Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/pipelines).**\n\n---\n",
"_____no_output_____"
],
[
"In this exercise, you will use **pipelines** to improve the efficiency of your machine learning code.\n\n# Setup\n\nThe questions below will give you feedback on your work. Run the following cell to set up the feedback system.",
"_____no_output_____"
]
],
[
[
"# Set up code checking\nimport os\nif not os.path.exists(\"../input/train.csv\"):\n os.symlink(\"../input/home-data-for-ml-course/train.csv\", \"../input/train.csv\") \n os.symlink(\"../input/home-data-for-ml-course/test.csv\", \"../input/test.csv\") \nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.ml_intermediate.ex4 import *\nprint(\"Setup Complete\")",
"Setup Complete\n"
]
],
[
[
"You will work with data from the [Housing Prices Competition for Kaggle Learn Users](https://www.kaggle.com/c/home-data-for-ml-course). \n\n\n\nRun the next code cell without changes to load the training and validation sets in `X_train`, `X_valid`, `y_train`, and `y_valid`. The test set is loaded in `X_test`.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom sklearn.model_selection import train_test_split\n\n# Read the data\nX_full = pd.read_csv('../input/train.csv', index_col='Id')\nX_test_full = pd.read_csv('../input/test.csv', index_col='Id')\n\n# Remove rows with missing target, separate target from predictors\nX_full.dropna(axis=0, subset=['SalePrice'], inplace=True)\ny = X_full.SalePrice\nX_full.drop(['SalePrice'], axis=1, inplace=True)\n\n# Break off validation set from training data\nX_train_full, X_valid_full, y_train, y_valid = train_test_split(X_full, y, \n train_size=0.8, test_size=0.2,\n random_state=0)\n\n# \"Cardinality\" means the number of unique values in a column\n# Select categorical columns with relatively low cardinality (convenient but arbitrary)\ncategorical_cols = [cname for cname in X_train_full.columns if\n X_train_full[cname].nunique() < 10 and \n X_train_full[cname].dtype == \"object\"]\n\n# Select numerical columns\nnumerical_cols = [cname for cname in X_train_full.columns if \n X_train_full[cname].dtype in ['int64', 'float64']]\n\n# Keep selected columns only\nmy_cols = categorical_cols + numerical_cols\nX_train = X_train_full[my_cols].copy()\nX_valid = X_valid_full[my_cols].copy()\nX_test = X_test_full[my_cols].copy()",
"_____no_output_____"
],
[
"X_train.head()",
"_____no_output_____"
]
],
[
[
"The next code cell uses code from the tutorial to preprocess the data and train a model. Run this code without changes.",
"_____no_output_____"
]
],
[
[
"from sklearn.compose import ColumnTransformer\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.metrics import mean_absolute_error\n\n# Preprocessing for numerical data\nnumerical_transformer = SimpleImputer(strategy='constant')\n\n# Preprocessing for categorical data\ncategorical_transformer = Pipeline(steps=[\n ('imputer', SimpleImputer(strategy='most_frequent')),\n ('onehot', OneHotEncoder(handle_unknown='ignore'))\n])\n\n# Bundle preprocessing for numerical and categorical data\npreprocessor = ColumnTransformer(\n transformers=[\n ('num', numerical_transformer, numerical_cols),\n ('cat', categorical_transformer, categorical_cols)\n ])\n\n# Define model\nmodel = RandomForestRegressor(n_estimators=100, random_state=0)\n\n# Bundle preprocessing and modeling code in a pipeline\nclf = Pipeline(steps=[('preprocessor', preprocessor),\n ('model', model)\n ])\n\n# Preprocessing of training data, fit model \nclf.fit(X_train, y_train)\n\n# Preprocessing of validation data, get predictions\npreds = clf.predict(X_valid)\n\nprint('MAE:', mean_absolute_error(y_valid, preds))",
"MAE: 17861.780102739725\n"
]
],
[
[
"The code yields a value around 17862 for the mean absolute error (MAE). In the next step, you will amend the code to do better.\n\n# Step 1: Improve the performance\n\n### Part A\n\nNow, it's your turn! In the code cell below, define your own preprocessing steps and random forest model. Fill in values for the following variables:\n- `numerical_transformer`\n- `categorical_transformer`\n- `model`\n\nTo pass this part of the exercise, you need only define valid preprocessing steps and a random forest model.",
"_____no_output_____"
]
],
[
[
"# Preprocessing for numerical data\nnumerical_transformer = SimpleImputer(strategy='constant') # Your code here\n\n# Preprocessing for categorical data\ncategorical_transformer = Pipeline(steps=[\n ('imputer', SimpleImputer(strategy='most_frequent')),\n ('onehot', OneHotEncoder(handle_unknown='ignore'))\n]) # Your code here\n\n# Bundle preprocessing for numerical and categorical data\npreprocessor = ColumnTransformer(\n transformers=[\n ('num', numerical_transformer, numerical_cols),\n ('cat', categorical_transformer, categorical_cols)\n ])\n\n# Define model\nmodel = RandomForestRegressor(n_estimators=200, random_state=0) # Your code here\n\n# Check your answer\nstep_1.a.check()",
"_____no_output_____"
],
[
"# Lines below will give you a hint or solution code\n#step_1.a.hint()\n#step_1.a.solution()",
"_____no_output_____"
]
],
[
[
"### Part B\n\nRun the code cell below without changes.\n\nTo pass this step, you need to have defined a pipeline in **Part A** that achieves lower MAE than the code above. You're encouraged to take your time here and try out many different approaches, to see how low you can get the MAE! (_If your code does not pass, please amend the preprocessing steps and model in Part A._)",
"_____no_output_____"
]
],
[
[
"# Bundle preprocessing and modeling code in a pipeline\nmy_pipeline = Pipeline(steps=[('preprocessor', preprocessor),\n ('model', model)\n ])\n\n# Preprocessing of training data, fit model \nmy_pipeline.fit(X_train, y_train)\n\n# Preprocessing of validation data, get predictions\npreds = my_pipeline.predict(X_valid)\n\n# Evaluate the model\nscore = mean_absolute_error(y_valid, preds)\nprint('MAE:', score)\n\n# Check your answer\nstep_1.b.check()",
"MAE: 17600.602294520544\n"
],
[
"# Line below will give you a hint\nstep_1.b.hint()",
"_____no_output_____"
]
],
[
[
"# Step 2: Generate test predictions\n\nNow, you'll use your trained model to generate predictions with the test data.",
"_____no_output_____"
]
],
[
[
"# Preprocessing of test data, fit model\npreds_test = my_pipeline.predict(X_test) # Your code here\n\n# Check your answer\nstep_2.check()",
"_____no_output_____"
],
[
"# Lines below will give you a hint or solution code\n#step_2.hint()\n#step_2.solution()",
"_____no_output_____"
]
],
[
[
"Run the next code cell without changes to save your results to a CSV file that can be submitted directly to the competition.",
"_____no_output_____"
]
],
[
[
"# Save test predictions to file\noutput = pd.DataFrame({'Id': X_test.index,\n 'SalePrice': preds_test})\noutput.to_csv('submission.csv', index=False)",
"_____no_output_____"
]
],
[
[
"# Submit your results\n\nOnce you have successfully completed Step 2, you're ready to submit your results to the leaderboard! If you choose to do so, make sure that you have already joined the competition by clicking on the **Join Competition** button at [this link](https://www.kaggle.com/c/home-data-for-ml-course). \n1. Begin by clicking on the blue **Save Version** button in the top right corner of the window. This will generate a pop-up window. \n2. Ensure that the **Save and Run All** option is selected, and then click on the blue **Save** button.\n3. This generates a window in the bottom left corner of the notebook. After it has finished running, click on the number to the right of the **Save Version** button. This pulls up a list of versions on the right of the screen. Click on the ellipsis **(...)** to the right of the most recent version, and select **Open in Viewer**. This brings you into view mode of the same page. You will need to scroll down to get back to these instructions.\n4. Click on the **Output** tab on the right of the screen. Then, click on the file you would like to submit, and click on the blue **Submit** button to submit your results to the leaderboard.\n\nYou have now successfully submitted to the competition!\n\nIf you want to keep working to improve your performance, select the blue **Edit** button in the top right of the screen. Then you can change your code and repeat the process. There's a lot of room to improve, and you will climb up the leaderboard as you work.\n\n\n# Keep going\n\nMove on to learn about [**cross-validation**](https://www.kaggle.com/alexisbcook/cross-validation), a technique you can use to obtain more accurate estimates of model performance!",
"_____no_output_____"
],
[
"---\n\n\n\n\n*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161289) to chat with other Learners.*",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e7351492c9af0334d12fc8231c0af936af011487 | 2,576 | ipynb | Jupyter Notebook | ZCU111/packages/xrfclk/pkg/test/test_chips.ipynb | yunqu/ZCU111-PYNQ | bcc63ec50c4ab7b5baf12124b4e2ddc050eb3914 | [
"BSD-3-Clause"
] | null | null | null | ZCU111/packages/xrfclk/pkg/test/test_chips.ipynb | yunqu/ZCU111-PYNQ | bcc63ec50c4ab7b5baf12124b4e2ddc050eb3914 | [
"BSD-3-Clause"
] | null | null | null | ZCU111/packages/xrfclk/pkg/test/test_chips.ipynb | yunqu/ZCU111-PYNQ | bcc63ec50c4ab7b5baf12124b4e2ddc050eb3914 | [
"BSD-3-Clause"
] | null | null | null | 21.289256 | 90 | 0.524457 | [
[
[
"!modprobe i2c-dev",
"_____no_output_____"
],
[
"!pip3 uninstall -y xrfclk\n!pwd\n!make clean\n!make all\n!make install",
"_____no_output_____"
],
[
"import xrfclk",
"_____no_output_____"
]
],
[
[
"Import the registers from TICs Pro generated \\*.txt file:",
"_____no_output_____"
]
],
[
[
"import csv\n_lmk04832Config = []\nwith open(\"./clk_configs/LMK04832_clk1_clk2_16MHz.txt\", newline='') as csvfile:\n spamreader = csv.reader(csvfile, delimiter='\\t')\n for row in spamreader:\n _lmk04832Config.append(int(row[1],16))",
"_____no_output_____"
],
[
"xrfclk._clear_int()",
"_____no_output_____"
],
[
"xrfclk._write_Lmk04832Regs_regs(_lmk04832Config)",
"_____no_output_____"
]
],
[
[
"Test to run through all possible configurations for Status_LD2 on LMK04832:",
"_____no_output_____"
]
],
[
[
"xrfclk._clear_int()\nfrom time import sleep\nfor TYPE in [TYPE for TYPE in range(3,7) if TYPE != 5]:\n for MUX in [MUX for MUX in range(0, 19) if MUX != 6]:\n Status_LD2 = (MUX << 3) + TYPE\n Status_LD2_REG = hex((0x16E << 8) + Status_LD2)\n _lmk04832Config[116] = int(Status_LD2_REG, 16)\n xrfclk._write_Lmk04832Regs_regs(_lmk04832Config)\n xrfclk._clear_int()\n sleep(0.1)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e73518bf59e15c53222a93694d4eb119c63acf06 | 18,427 | ipynb | Jupyter Notebook | julia/dev/TrajOpt01.ipynb | dehann/ThermalSoaring | b5f0bbd0be7d599a43f08546171ebac89b7a9333 | [
"MIT"
] | 5 | 2015-07-25T11:17:35.000Z | 2020-06-08T20:10:54.000Z | julia/dev/TrajOpt01.ipynb | dehann/ThermalSoaring | b5f0bbd0be7d599a43f08546171ebac89b7a9333 | [
"MIT"
] | null | null | null | julia/dev/TrajOpt01.ipynb | dehann/ThermalSoaring | b5f0bbd0be7d599a43f08546171ebac89b7a9333 | [
"MIT"
] | 4 | 2015-07-25T11:41:55.000Z | 2019-11-03T22:40:44.000Z | 105.902299 | 14,001 | 0.840994 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e73525e951799e635430d4166fd74edc74309714 | 637 | ipynb | Jupyter Notebook | Components/POP/tutorial1.ipynb | NCAR/CESM-Lab-Tutorial | 44d3683f16d840ac9e6c51febb4545b9e8f56256 | [
"Apache-2.0"
] | 2 | 2021-01-14T20:54:35.000Z | 2021-01-14T20:55:14.000Z | Components/POP/tutorial1.ipynb | NCAR/CESM-Lab-Tutorial | 44d3683f16d840ac9e6c51febb4545b9e8f56256 | [
"Apache-2.0"
] | 1 | 2022-02-10T15:55:52.000Z | 2022-02-10T15:55:52.000Z | Components/POP/tutorial1.ipynb | NCAR/CESM-Lab-Tutorial | 44d3683f16d840ac9e6c51febb4545b9e8f56256 | [
"Apache-2.0"
] | 1 | 2020-11-18T22:45:57.000Z | 2020-11-18T22:45:57.000Z | 18.2 | 50 | 0.536892 | [
[
[
"## POP Tutorial - Basics\n\nThis will be a POP-focused tutorial.",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
e7352d6f69ae7bb790424ff74b81a9407fc0fe73 | 253,582 | ipynb | Jupyter Notebook | glove/large_abstract_glove.ipynb | garrett361/arxiv-vixra-ml | 829629927aafead91a8ba582c8caa1df922b1821 | [
"Apache-2.0"
] | 1 | 2021-11-24T17:29:22.000Z | 2021-11-24T17:29:22.000Z | glove/large_abstract_glove.ipynb | garrett361/arxiv-vixra-ml | 829629927aafead91a8ba582c8caa1df922b1821 | [
"Apache-2.0"
] | null | null | null | glove/large_abstract_glove.ipynb | garrett361/arxiv-vixra-ml | 829629927aafead91a8ba582c8caa1df922b1821 | [
"Apache-2.0"
] | null | null | null | 126,791 | 253,581 | 0.759719 | [
[
[
"# GloVe\nUsing the large abstract data encoded with the balanced title tokens.",
"_____no_output_____"
],
[
"# Imports and Setup\n\nCommon imports and standardized code for importing the relevant data, models, etc., in order to minimize copy-paste/typo errors.",
"_____no_output_____"
],
[
"Imports and colab setup",
"_____no_output_____"
]
],
[
[
"%%capture import_capture --no-stder\n# Jupyter magic methods\n# For auto-reloading when external modules are changed\n%load_ext autoreload\n%autoreload 2\n# For showing plots inline\n%matplotlib inline\n\n# pip installs needed in Colab for arxiv_vixra_models\n!pip install wandb\n!pip install pytorch-lightning\n!pip install unidecode\n# Update sklearn\n!pip uninstall scikit-learn -y\n!pip install -U scikit-learn\n\nfrom copy import deepcopy\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\npd.set_option(u'float_format', '{:f}'.format)\nimport pytorch_lightning as pl\nfrom pytorch_lightning import Trainer\nfrom pytorch_lightning.loggers import WandbLogger\nfrom pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor\nimport seaborn as sns\nimport torch\nimport wandb\n",
"_____no_output_____"
]
],
[
[
"`wandb` log in:",
"_____no_output_____"
]
],
[
[
"wandb.login()",
"\u001b[34m\u001b[1mwandb\u001b[0m: Currently logged in as: \u001b[33mgarrett361\u001b[0m (use `wandb login --relogin` to force relogin)\n"
]
],
[
[
"Google drive access",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount(\"/content/drive\", force_remount=True)\n# Enter the relevant foldername\nFOLDERNAME = '/content/drive/My Drive/ML/arxiv_vixra'\nassert FOLDERNAME is not None, \"[!] Enter the foldername.\"\n# For importing modules stored in FOLDERNAME or a subdirectory thereof:\nimport sys\nsys.path.append(FOLDERNAME)",
"Mounted at /content/drive\n"
],
[
"import arxiv_vixra_models as avm",
"_____no_output_____"
],
[
"notebook_model = avm.LitGloVe",
"_____no_output_____"
]
],
[
[
"Copy data to cwd for speed.",
"_____no_output_____"
]
],
[
[
"SUBDIR = '/data/data_splits/'\ntitle_tokens_file_name = 'balanced_title_normalized_vocab.feather'\n!cp '{FOLDERNAME + SUBDIR + title_tokens_file_name}' .\ntitle_tokens_df = pd.read_feather(title_tokens_file_name)\nwith open(FOLDERNAME + SUBDIR + 'heatmap_words.txt', 'r') as f:\n heatmap_words = f.read().split()\nwith open(FOLDERNAME + SUBDIR + 'pca_words.txt', 'r') as f:\n pca_words =f.read().split()\nwith open(FOLDERNAME + SUBDIR + 'tsne_words.txt', 'r') as f:\n tsne_words = f.read().split()",
"_____no_output_____"
]
],
[
[
"Computing specs. Save the number of processors to pass as `num_workers` into the Datamodule and cuda availability for other flags.",
"_____no_output_____"
]
],
[
[
"# GPU. Save availability to IS_CUDA_AVAILABLE.\ngpu_info= !nvidia-smi\ngpu_info = '\\n'.join(gpu_info)\nif gpu_info.find('failed') >= 0:\n print('Not connected to a GPU')\n IS_CUDA_AVAILABLE = False\nelse:\n print(f\"GPU\\n{50 * '-'}\\n\", gpu_info, '\\n')\n IS_CUDA_AVAILABLE = True\n\n# Memory.\nfrom psutil import virtual_memory, cpu_count\nram_gb = virtual_memory().total / 1e9\nprint(f\"Memory\\n{50 * '-'}\\n\", 'Your runtime has {:.1f} gigabytes of available RAM\\n'.format(ram_gb), '\\n')\n\n# CPU.\nprint(f\"CPU\\n{50 * '-'}\\n\", f'CPU Processors: {cpu_count()}')\n# Determine the number of workers to use in the datamodule\nNUM_PROCESSORS = cpu_count()",
"GPU\n--------------------------------------------------\n Thu Jan 20 16:54:24 2022 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 495.46 Driver Version: 460.32.03 CUDA Version: 11.2 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 Tesla V100-SXM2... Off | 00000000:00:04.0 Off | 0 |\n| N/A 32C P0 24W / 300W | 0MiB / 16160MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n| No running processes found |\n+-----------------------------------------------------------------------------+ \n\nMemory\n--------------------------------------------------\n Your runtime has 54.8 gigabytes of available RAM\n \n\nCPU\n--------------------------------------------------\n CPU Processors: 8\n"
],
[
"from requests import get\nPROJECT = get('http://172.28.0.2:9000/api/sessions').json()[0]['name']\nPROJECT = PROJECT.replace('.ipynb', '').replace('Kopie%20van%20', '').replace('Copy%20of%20', '')\nprint(PROJECT)\nENTITY = 'garrett361'",
"large_abstract_glove\n"
]
],
[
[
"Create the mapping from words to indices and vice-versa, recalling that 0 and 1 are reserved for padding and `<UNK>`, respectively.",
"_____no_output_____"
]
],
[
[
"title_word_to_idx = avm.word_to_idx_dict_from_df(title_tokens_df)\ntitle_idx_to_word = avm.idx_to_word_dict_from_df(title_tokens_df)",
"_____no_output_____"
]
],
[
[
"Load in the relevant co-occurence matrix:",
"_____no_output_____"
]
],
[
[
"co_matrix = torch.load(FOLDERNAME + SUBDIR + \"large_abstract_with_title_mapping_co_matrix_context_5.pt\")",
"_____no_output_____"
]
],
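[
[
"As an optional check (not in the original notebook), we can confirm the co-occurrence matrix loaded correctly and inspect its shape and sparsity before handing it to the model.",
"_____no_output_____"
]
],
[
[
"# Optional inspection of the loaded co-occurrence tensor.\n# Both `shape` and `is_sparse` are available on dense and sparse torch tensors.\nprint(type(co_matrix), co_matrix.shape, co_matrix.is_sparse)",
"_____no_output_____"
]
],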
[
[
"# Model Training\n\nSetting hyperparameters and performing a `wandb`-synced training loop.",
"_____no_output_____"
]
],
[
[
"cyclic_lr_scheduler_args = {'base_lr': 5e-5,\n 'max_lr': 5e-2,\n 'step_size_up': 128,\n 'cycle_momentum': False}\nplateau_lr_scheduler_args = {'verbose': True,\n 'patience': 2,\n 'factor': .75}\nmodel_args_dict = {'co_matrix_sparse': co_matrix,\n 'batch_size': 2 ** 21,\n 'num_workers': NUM_PROCESSORS,\n 'pin_memory': IS_CUDA_AVAILABLE,\n 'persistent_workers': True,\n 'save_models_to_wandb': True,\n 'embedding_dim': 256,\n 'lr': 5e-2,\n 'lr_scheduler': 'cyclic',\n 'lr_scheduler_args': cyclic_lr_scheduler_args,\n 'lr_scheduler_interval': 'step'\n }\nmodel = notebook_model(**model_args_dict)",
"_____no_output_____"
]
],
[
[
"Training:",
"_____no_output_____"
]
],
[
[
"trainer = Trainer(logger=WandbLogger(),\n gpus=-1 if IS_CUDA_AVAILABLE else 0,\n log_every_n_steps=1,\n precision=16,\n profiler='simple',\n callbacks=[avm.WandbVisualEmbeddingCallback(model=model,\n heatmap_words=heatmap_words,\n pca_words=pca_words,\n tsne_words=tsne_words,\n word_to_idx_dict=title_word_to_idx,\n idx_to_word_dict=title_idx_to_word,\n k=5,\n heatmap_title=f'{PROJECT} Cosine Heatmap',\n pca_title=f'{PROJECT} PCA',\n tsne_title=f'{PROJECT} t-SNE',\n ),\n LearningRateMonitor()\n ])\nwith wandb.init(project=PROJECT) as run:\n run.name = f\"lr_{model.hparams['lr']}_scheduler_{model_args_dict.get('lr_scheduler', None)}\"\n trainer.fit(model)\n plt.close(\"all\")\n",
"Using 16bit native Automatic Mixed Precision (AMP)\nGPU available: True, used: True\nTPU available: False, using: 0 TPU cores\nIPU available: False, using: 0 IPUs\n"
]
],
[
[
"# Loading Best Models",
"_____no_output_____"
]
],
[
[
"wandb_api = wandb.Api()\nnotebook_runs = wandb_api.runs(ENTITY + \"/\" + PROJECT)\nrun_cats = ('best_loss', 'name', 'wandb_path', 'timestamp')\nruns_sort_cat = 'best_loss'\nrun_state_dict_file_name = 'glove.pt'\nrun_init_params_file_name = 'model_init_params.pt'\n\nnotebook_runs_dict = {key: [] for key in run_cats}\n\nfor run in notebook_runs:\n run_json = run.summary._json_dict\n if runs_sort_cat in run_json:\n notebook_runs_dict[runs_sort_cat].append(run_json[runs_sort_cat])\n notebook_runs_dict['name'].append(run.name)\n notebook_runs_dict['wandb_path'].append('/'.join(run.path))\n notebook_runs_dict['timestamp'].append(run_json['_timestamp'])\n# See top runs:\nnotebook_runs_df = pd.DataFrame(notebook_runs_dict).sort_values(by=runs_sort_cat, ascending=True).reset_index(drop=True)\nbest_model_wandb_path = notebook_runs_df.iloc[0]['wandb_path']\ndisplay(notebook_runs_df)\n# Write state dict and init params to final models folder.\n!cp \"{run_state_dict_file_name}\" \"{FOLDERNAME + '/final_models/' + PROJECT + '_state_dict.pt'}\"\n!cp \"{run_init_params_file_name}\" \"{FOLDERNAME + '/final_models/' + PROJECT + '_init_params.pt'}\"\n# Restore best model.\nwandb.restore(run_state_dict_file_name, run_path = best_model_wandb_path, replace=True)\nwandb.restore(run_init_params_file_name, run_path = best_model_wandb_path, replace=True)\nbest_model_state_dict = torch.load(run_state_dict_file_name)\nbest_model_init_params = torch.load(run_init_params_file_name)\nbest_model = notebook_model(**best_model_init_params)\nbest_model.load_state_dict(torch.load(run_state_dict_file_name))",
"_____no_output_____"
]
],
[
[
"Save the state dicts locally and rebuild the corresponding models.",
"_____no_output_____"
]
],
[
[
"# wandb stores None values in the config dict as a string literal. Need to\n# fix these entries, annoyingly.\nfor key, val in best_model_df.config.items():\n if val == 'None':\n best_model_df.config[key] = None\n# Write to disk\nglove_file_name = f\"glove_dim_{best_model_df.config['embedding_dim']}.pt\"\nwandb.restore(glove_file_name,\n run_path=best_model_df.wandb_path,\n replace=True)\nglove_file_name_suffix = '_'.join(glove_file_name.split('_')[-2:])\n# Also copy to the final_models folder\n!cp '{glove_file_name}' \"{FOLDERNAME + '/final_models/' + PROJECT + '_' + glove_file_name_suffix}\"",
"_____no_output_____"
],
[
"best_model = notebook_model(**{**best_model_df.config, **{'co_matrix': co_matrix}})\nbest_model.load_state_dict(torch.load(glove_file_name))",
"_____no_output_____"
]
],
[
[
"# Visualize",
"_____no_output_____"
]
],
[
[
"heatmap = avm.embedding_cosine_heatmap(model=best_model,\n words=heatmap_words,\n word_to_idx=title_word_to_idx)",
"_____no_output_____"
],
[
"pca = avm.pca_3d_embedding_plotter_topk(model=best_model,\n words=pca_words,\n word_to_idx=title_word_to_idx,\n idx_to_word=title_idx_to_word,\n title='PCA',\n k=5)",
"_____no_output_____"
],
[
"tsne = avm.tsne_3d_embedding_plotter_topk(model=best_model,\n words=tsne_words,\n word_to_idx=title_word_to_idx,\n idx_to_word=title_idx_to_word,\n title='t-SNE',\n k=5)",
"_____no_output_____"
],
[
"pca.show()",
"_____no_output_____"
],
[
"tsne.show()",
"_____no_output_____"
],
[
"avm.embedding_utils.topk_analogies_df(best_model,\n 'newton mechanics heisenberg'.split(),\n title_word_to_idx,\n title_idx_to_word)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e73535218e03422d09be5fc0d9075f3150e484ab | 42,915 | ipynb | Jupyter Notebook | causal-inference-for-the-brave-and-true/12-Doubly-Robust-Estimation.ipynb | qiringji/python-causality-handbook | add5ab57a8e755242bdbc3d4d0ee00867f6a1e55 | [
"MIT"
] | 1 | 2021-07-07T03:57:54.000Z | 2021-07-07T03:57:54.000Z | causal-inference-for-the-brave-and-true/12-Doubly-Robust-Estimation.ipynb | adebayoj/python-causality-handbook | bae5790bba173c89dedacbe6bcd3d65c1dc20a07 | [
"MIT"
] | null | null | null | causal-inference-for-the-brave-and-true/12-Doubly-Robust-Estimation.ipynb | adebayoj/python-causality-handbook | bae5790bba173c89dedacbe6bcd3d65c1dc20a07 | [
"MIT"
] | null | null | null | 70.008157 | 18,080 | 0.734382 | [
[
[
"# 12 - Doubly Robust Estimation\n\n## Don't Put All your Eggs in One Basket\n\nWe've learned how to use linear regression and propensity score weighting to estimate \\\\(E[Y|Y=1] - E[Y|Y=0] | X\\\\). But which one should we use and when? When in doubt, just use both! Doubly Robust Estimation is a way of combining propensity score and linear regression in a way you don't have to rely on either of them. \n\nTo see how this works, let's consider the mindset experiment. It is a randomised study conducted in U.S. public high schools which aims at finding the impact of a growth mindset. The way it works is that students receive from the school a seminar to instil in them a growth mindset. Then, they follow up with the students in their college years to measure how well they performed academically. This measurement was compiled into an achievement score and standardised. The real data on this study is not publicly available in order to preserve students' privacy. However, we have a simulated dataset with the same statistical properties provided by [Athey and Wager](https://arxiv.org/pdf/1902.07409.pdf), so we will use that instead.",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.filterwarnings('ignore')\n\nimport pandas as pd\nimport numpy as np\nfrom matplotlib import style\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\n\n%matplotlib inline\n\nstyle.use(\"fivethirtyeight\")\npd.set_option(\"display.max_columns\", 6)",
"_____no_output_____"
],
[
"data = pd.read_csv(\"./data/learning_mindset.csv\")\ndata.sample(5, random_state=5)",
"_____no_output_____"
]
],
[
[
"Although the study was randomised, it doesn't seem to be the case that this data is free from confounding. One possible reason for this is that the treatment variable is measured by the student's receipt of the seminar. So, although the opportunity to participate was random, participation is not. We are dealing with a case of non-compliance here. One evidence of this is how the student's success expectation is correlated with the participation in the seminar. Students with higher self-reported high expectations are more likely to have joined the growth mindset seminar.",
"_____no_output_____"
]
],
[
[
"data.groupby(\"success_expect\")[\"intervention\"].mean()",
"_____no_output_____"
]
],
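[
[
"As an optional reference point (this check is not in the original chapter), the naive, unadjusted difference in means between treated and untreated students gives us a baseline to compare the adjusted estimates against. With confounding, this number can be misleading.",
"_____no_output_____"
]
],
[
[
"# Naive (unadjusted) difference in means -- biased if participation is confounded.\nnaive_diff = (data.query(\"intervention==1\")[\"achievement_score\"].mean()\n - data.query(\"intervention==0\")[\"achievement_score\"].mean())\nnaive_diff",
"_____no_output_____"
]
],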
[
[
"As we know by now, we could adjust for this using a linear regression or by estimating a propensity score model with a logistic regression. Before we do that, however, we need to convert the categorical variables to dummies.",
"_____no_output_____"
]
],
[
[
"categ = [\"ethnicity\", \"gender\", \"school_urbanicity\"]\ncont = [\"school_mindset\", \"school_achievement\", \"school_ethnic_minority\", \"school_poverty\", \"school_size\"]\n\ndata_with_categ = pd.concat([\n data.drop(columns=categ), # dataset without the categorical features\n pd.get_dummies(data[categ], columns=categ, drop_first=False) # categorical features converted to dummies\n], axis=1)\n\nprint(data_with_categ.shape)",
"(10391, 32)\n"
]
],
[
[
"We are now ready to understand how doubly robust estimation works.\n\n## Doubly Robust Estimation\n\n\n\nInstead of deriving the estimator, I'll first show it to you and only then tell why it is awesome.\n\n$\n\\hat{ATE} = \\frac{1}{N}\\sum \\bigg( \\dfrac{T_i(Y_i - \\hat{\\mu_1}(X_i))}{\\hat{P}(X_i)} + \\hat{\\mu_1}(X_i) \\bigg) - \\frac{1}{N}\\sum \\bigg( \\dfrac{(1-T_i)(Y_i - \\hat{\\mu_0}(X_i))}{1-\\hat{P}(X_i)} + \\hat{\\mu_0}(X_i) \\bigg)\n$\n\nwhere \\\\(\\hat{P}(x)\\\\) is an estimation of the propensity score (using logistic regression, for example), \\\\(\\hat{\\mu_1}(x)\\\\) is an estimation of \\\\(E[Y|X, T=1]\\\\) (using linear regression, for example), and \\\\(\\hat{\\mu_0}(x)\\\\) is an estimation of \\\\(E[Y|X, T=0]\\\\). As you might have already guessed, the first part of the doubly robust estimator estimates \\\\(E[Y_1]\\\\) and the second part estimates \\\\(E[Y_0]\\\\). Let's examine the first part, as all the intuition will also apply to the second part by analogy.\n\nSince I know that this formula is scary at first (but don't worry, you will see it is super simple), I will first show how to code this estimator. I have the feeling that some people are less frightened by code than by formulas. Let's see how this estimator works in practice, shall we?",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression, LinearRegression\n\ndef doubly_robust(df, X, T, Y):\n ps = LogisticRegression(C=1e6).fit(df[X], df[T]).predict_proba(df[X])[:, 1]\n mu0 = LinearRegression().fit(df.query(f\"{T}==0\")[X], df.query(f\"{T}==0\")[Y]).predict(df[X])\n mu1 = LinearRegression().fit(df.query(f\"{T}==1\")[X], df.query(f\"{T}==1\")[Y]).predict(df[X])\n return (\n np.mean(df[T]*(df[Y] - mu1)/ps + mu1) -\n np.mean((1-df[T])*(df[Y] - mu0)/(1-ps) + mu0)\n )",
"_____no_output_____"
],
[
"T = 'intervention'\nY = 'achievement_score'\nX = data_with_categ.columns.drop(['schoolid', T, Y])\n\ndoubly_robust(data_with_categ, X, T, Y)",
"_____no_output_____"
]
],
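[
[
"As an optional side-by-side check (not part of the original chapter code), we can also compute the two estimators that doubly robust estimation combines, a pure outcome-regression estimate and a pure propensity score weighting estimate, and compare them to the doubly robust number. The variable names below are my own.",
"_____no_output_____"
]
],
[
[
"# Pure outcome-regression and pure propensity-weighting estimates, for comparison with the DR estimate above.\nps = LogisticRegression(C=1e6).fit(data_with_categ[X], data_with_categ[T]).predict_proba(data_with_categ[X])[:, 1]\nmu0 = LinearRegression().fit(data_with_categ.query(f\"{T}==0\")[X], data_with_categ.query(f\"{T}==0\")[Y]).predict(data_with_categ[X])\nmu1 = LinearRegression().fit(data_with_categ.query(f\"{T}==1\")[X], data_with_categ.query(f\"{T}==1\")[Y]).predict(data_with_categ[X])\nprint('regression ATE:', np.mean(mu1 - mu0))\nprint('propensity weighting ATE:', np.mean(data_with_categ[T]*data_with_categ[Y]/ps) - np.mean((1-data_with_categ[T])*data_with_categ[Y]/(1-ps)))",
"_____no_output_____"
]
],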
[
[
"Doubly robust estimator is saying that we should expect individuals who attended the mindset seminar to be 0.388 standard deviations above their untreated fellows, in terms of achievements. Once again, we can use bootstrap to construct confidence intervals.",
"_____no_output_____"
]
],
[
[
"from joblib import Parallel, delayed # for parallel processing\n\nnp.random.seed(88)\n# run 1000 bootstrap samples\nbootstrap_sample = 1000\nates = Parallel(n_jobs=4)(delayed(doubly_robust)(data_with_categ.sample(frac=1, replace=True), X, T, Y)\n for _ in range(bootstrap_sample))\nates = np.array(ates)",
"_____no_output_____"
],
[
"print(f\"ATE 95% CI:\", (np.percentile(ates, 2.5), np.percentile(ates, 97.5)))",
"ATE 95% CI: (0.3536507259630512, 0.4197834129772669)\n"
],
[
"sns.distplot(ates, kde=False)\nplt.vlines(np.percentile(ates, 2.5), 0, 20, linestyles=\"dotted\")\nplt.vlines(np.percentile(ates, 97.5), 0, 20, linestyles=\"dotted\", label=\"95% CI\")\nplt.title(\"ATE Bootstrap Distribution\")\nplt.legend();",
"_____no_output_____"
]
],
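  [
    [
      "Just to build some intuition before we dig into why this estimator is so great, we can look at its two ingredients separately. The little helper below (the name `regression_and_ipw_ate` is ours, added just for this side check) reuses the exact same linear and logistic models to compute a regression-only ATE and a pure propensity score weighting ATE. Both should land in the same ballpark as the doubly robust estimate we've just computed.",
      "_____no_output_____"
    ]
  ],
  [
    [
      "def regression_and_ipw_ate(df, X, T, Y):\n    # same models as in doubly_robust, but used one at a time\n    ps = LogisticRegression(C=1e6).fit(df[X], df[T]).predict_proba(df[X])[:, 1]\n    mu0 = LinearRegression().fit(df.query(f\"{T}==0\")[X], df.query(f\"{T}==0\")[Y]).predict(df[X])\n    mu1 = LinearRegression().fit(df.query(f\"{T}==1\")[X], df.query(f\"{T}==1\")[Y]).predict(df[X])\n    regression_ate = np.mean(mu1 - mu0)  # outcome models only\n    ipw_ate = np.mean(df[T]*df[Y]/ps) - np.mean((1-df[T])*df[Y]/(1-ps))  # propensity score only\n    return regression_ate, ipw_ate\n\nregression_and_ipw_ate(data_with_categ, X, T, Y)",
      "_____no_output_____"
    ]
  ],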
[
[
"Now that we got a taste of the doubly robust estimator, let's examine why it is so great. First, it is called doubly robust because it only requires one of the models, \\\\(\\hat{P}(x)\\\\) or \\\\(\\hat{\\mu}(x)\\\\), to be correctly specified. To see this, take the first part that estimates \\\\(E[Y_1]\\\\) and take a good look at it.\n\n$\n\\hat{E}[Y_1] = \\frac{1}{N}\\sum \\bigg( \\dfrac{T_i(Y_i - \\hat{\\mu_1}(X_i))}{\\hat{P}(X_i)} + \\hat{\\mu_1}(X_i) \\bigg)\n$\n\nAssume that \\\\(\\hat{\\mu_1}(x)\\\\) is correct. If the propensity score model is wrong, we wouldn't need to worry. Because if \\\\(\\hat{\\mu_1}(x)\\\\) is correct, then \\\\(E[T_i(Y_i - \\hat{\\mu_1}(X_i))]=0\\\\). That is because the multiplication by \\\\(T_i\\\\) selects only the treated and the residual of \\\\(\\hat{\\mu_1}\\\\) on the treated have, by definition, mean zero. This causes the whole thing to reduce to \\\\(\\hat{\\mu_1}(X_i)\\\\), which is correctly estimated \\\\(E[Y_1]\\\\) by assumption. So, you see, that by being correct, \\\\(\\hat{\\mu_1}(X_i)\\\\) wipes out the relevance of the propensity score model. We can apply the same reasoning to understand the estimator of \\\\(E[Y_0]\\\\). \n\nBut don't take my word for it. Let the code show you the way! In the following estimator, I've replaced the logistic regression that estimates the propensity score by a random uniform variable that goes from 0.1 to 0.9 (I don't want very small weights to blow up my propensity score variance). Since this is random, there is no way it is a good propensity score model, but we will see that the doubly robust estimator still manages to produce an estimation that is very close to when the propensity score was estimated with logistic regression.",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression, LinearRegression\n\ndef doubly_robust_wrong_ps(df, X, T, Y):\n # wrong PS model\n np.random.seed(654)\n ps = np.random.uniform(0.1, 0.9, df.shape[0])\n mu0 = LinearRegression().fit(df.query(f\"{T}==0\")[X], df.query(f\"{T}==0\")[Y]).predict(df[X])\n mu1 = LinearRegression().fit(df.query(f\"{T}==1\")[X], df.query(f\"{T}==1\")[Y]).predict(df[X])\n return (\n np.mean(df[T]*(df[Y] - mu1)/ps + mu1) -\n np.mean((1-df[T])*(df[Y] - mu0)/(1-ps) + mu0)\n )",
"_____no_output_____"
],
[
"doubly_robust_wrong_ps(data_with_categ, X, T, Y)",
"_____no_output_____"
]
],
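  [
    [
      "As a quick sanity check on the argument above: since \\\\(\\hat{\\mu_1}\\\\) here is a linear regression with an intercept, its residuals on the treated average out to essentially zero by construction. That residual term is what gets divided by the (random) propensity score, which is why the nonsense weights did so little damage. This is just an illustration on this dataset, not a proof.",
      "_____no_output_____"
    ]
  ],
  [
    [
      "# sanity check: mean residual of mu1 on the treated (essentially zero for a linear model with an intercept)\ntreated = data_with_categ.query(f\"{T}==1\")\nmu1_hat = LinearRegression().fit(treated[X], treated[Y]).predict(treated[X])\nnp.mean(treated[Y] - mu1_hat)",
      "_____no_output_____"
    ]
  ],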
[
[
"If we use bootstrap, we can see that the variance is slightly higher than when the propensity score was estimated with a logistic regression.",
"_____no_output_____"
]
],
[
[
"np.random.seed(88)\nparallel_fn = delayed(doubly_robust_wrong_ps)\nwrong_ps = Parallel(n_jobs=4)(parallel_fn(data_with_categ.sample(frac=1, replace=True), X, T, Y)\n for _ in range(bootstrap_sample))\nwrong_ps = np.array(wrong_ps)",
"_____no_output_____"
],
[
"print(f\"ATE 95% CI:\", (np.percentile(ates, 2.5), np.percentile(ates, 97.5)))",
"ATE 95% CI: (0.3536507259630512, 0.4197834129772669)\n"
]
],
[
[
"This covers the case that the propensity model is wrong but the outcome model is correct. What about the other situation? Let's again take a good look at the first part of the estimator, but let's rearrange some terms\n\n$\n\\hat{E}[Y_1] = \\frac{1}{N}\\sum \\bigg( \\dfrac{T_i(Y_i - \\hat{\\mu_1}(X_i))}{\\hat{P}(X_i)} + \\hat{\\mu_1}(X_i) \\bigg)\n$\n\n$\n\\hat{E}[Y_1] = \\frac{1}{N}\\sum \\bigg( \\dfrac{T_iY_i}{\\hat{P}(X_i)} - \\dfrac{T_i\\hat{\\mu_1}(X_i)}{\\hat{P}(X_i)} + \\hat{\\mu_1}(X_i) \\bigg)\n$\n\n$\n\\hat{E}[Y_1] = \\frac{1}{N}\\sum \\bigg( \\dfrac{T_iY_i}{\\hat{P}(X_i)} - \\bigg(\\dfrac{T_i}{\\hat{P}(X_i)} - 1\\bigg) \\hat{\\mu_1}(X_i) \\bigg)\n$\n\n$\n\\hat{E}[Y_1] = \\frac{1}{N}\\sum \\bigg( \\dfrac{T_iY_i}{\\hat{P}(X_i)} - \\bigg(\\dfrac{T_i - \\hat{P}(X_i)}{\\hat{P}(X_i)}\\bigg) \\hat{\\mu_1}(X_i) \\bigg)\n$\n\nNow, assume that the propensity score \\\\(\\hat{P}(X_i)\\\\) is correctly specified. In this case, \\\\(E[T_i - \\hat{P}(X_i)]=0\\\\), which wipes out the part dependent on \\\\(\\hat{\\mu_1}(X_i)\\\\). This makes the doubly robust estimator reduce to the propensity score weighting estimator \\\\(\\frac{T_iY_i}{\\hat{P}(X_i)}\\\\), which is correct by assumption. So, even if the \\\\(\\hat{\\mu_1}(X_i)\\\\) is wrong, the estimator will still be correct, provided that the propensity score is correctly specified.\n\nOnce again, if you believe more in code than in formulas, here it is the practical verification. In the code below, I've replaced both regression models with a random normal variable. There is no doubt that \\\\(\\hat{\\mu}(X_i)\\\\) is **not correctly specified**. Still, we will see that doubly robust estimation still manages to recover the same \\\\(\\hat{ATE}\\\\) of about 0.38 that we've seen before.",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression, LinearRegression\n\ndef doubly_robust_wrong_model(df, X, T, Y):\n np.random.seed(654)\n ps = LogisticRegression(C=1e6).fit(df[X], df[T]).predict_proba(df[X])[:, 1]\n \n # wrong mu(x) model\n mu0 = np.random.normal(0, 1, df.shape[0])\n mu1 = np.random.normal(0, 1, df.shape[0])\n return (\n np.mean(df[T]*(df[Y] - mu1)/ps + mu1) -\n np.mean((1-df[T])*(df[Y] - mu0)/(1-ps) + mu0)\n )",
"_____no_output_____"
],
[
"doubly_robust_wrong_model(data_with_categ, X, T, Y)",
"_____no_output_____"
]
],
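  [
    [
      "To see the flip side in numbers, we can compare the estimate above with plain propensity score weighting, which uses no outcome model at all. Since the logistic regression is fit with an intercept, \\\\(T_i - \\hat{P}(X_i)\\\\) averages out to roughly zero, so the doubly robust estimate with nonsense outcome models should stay close to the pure weighting estimate. Again, this is only a side check on this data.",
      "_____no_output_____"
    ]
  ],
  [
    [
      "# side check: pure propensity score weighting, with no outcome model at all\nps = LogisticRegression(C=1e6).fit(data_with_categ[X], data_with_categ[T]).predict_proba(data_with_categ[X])[:, 1]\nipw_ate = np.mean(data_with_categ[T]*data_with_categ[Y]/ps) - np.mean((1-data_with_categ[T])*data_with_categ[Y]/(1-ps))\nipw_ate, np.mean(data_with_categ[T] - ps)  # T - P(x) should be centered near zero",
      "_____no_output_____"
    ]
  ],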
[
[
"One again, we can use bootstrap and see that the variance is just slightly higher.",
"_____no_output_____"
]
],
[
[
"np.random.seed(88)\nparallel_fn = delayed(doubly_robust_wrong_model)\nwrong_mux = Parallel(n_jobs=4)(parallel_fn(data_with_categ.sample(frac=1, replace=True), X, T, Y)\n for _ in range(bootstrap_sample))\nwrong_mux = np.array(wrong_mux)",
"_____no_output_____"
],
[
"print(f\"ATE 95% CI:\", (np.percentile(ates, 2.5), np.percentile(ates, 97.5)))",
"ATE 95% CI: (0.3536507259630512, 0.4197834129772669)\n"
]
],
[
[
"I hope I've convinced you about the power of doubly robust estimation. Its magic happens because in causal inference, there are two ways to remove bias from our causal estimates: you either model the treatment mechanism or the outcome mechanism. If either of these models are correct, you are good to go.\n\nOne caveat is that, in practice, it's very hard to model precisely either of those. More often, what ends up happening is that neither the propensity score nor the outcome model are 100% correct. They are both wrong, but in different ways. When this happens, it is not exactly settled [\\[1\\]](https://www.stat.cmu.edu/~ryantibs/journalclub/kang_2007.pdf) [\\[2\\]](https://arxiv.org/pdf/0804.2969.pdf) [\\[3\\]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2798744/) if it's better to use a single model or doubly robust estimation. As for me, I still like using them because at least it gives me two possibilities of being correct. \n\n\n## Keys Ideas\n\nHere, we saw a simple way of combining linear regression with the propensity score to produce a doubly robust estimator. This estimator bears that name because it only requires one of the models to be correct. If the propensity score model is correct, we will be able to identify the causal effect even if the outcome model is wrong. On the flip side, if the outcome model is correct, we will also be able to identify the causal effect even if the propensity score model is wrong.\n\n## References\n\nI like to think of this entire book as a tribute to Joshua Angrist, Alberto Abadie and Christopher Walters for their amazing Econometrics class. Most of the ideas here are taken from their classes at the American Economic Association. Watching them is what is keeping me sane during this tough year of 2020.\n* [Cross-Section Econometrics](https://www.aeaweb.org/conference/cont-ed/2017-webcasts)\n* [Mastering Mostly Harmless Econometrics](https://www.aeaweb.org/conference/cont-ed/2020-webcasts)\n\nI'll also like to reference the amazing books from Angrist. They have shown me that Econometrics, or 'Metrics as they call it, is not only extremely useful but also profoundly fun.\n\n* [Mostly Harmless Econometrics](https://www.mostlyharmlesseconometrics.com/)\n* [Mastering 'Metrics](https://www.masteringmetrics.com/)\n\nMy final reference is Miguel Hernan and Jamie Robins' book. It has been my trustworthy companion in the most thorny causal questions I had to answer.\n\n* [Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)\n\nThe data that we used was taken from the article [Estimating Treatment Effects with Causal Forests: An Application](https://arxiv.org/pdf/1902.07409.pdf), by Susan Athey and Stefan Wager. \n\n## Contribute\n\nCausal Inference for the Brave and True is an open-source material on causal inference, the statistics of science. It uses only free software, based in Python. Its goal is to be accessible monetarily and intellectually.\nIf you found this book valuable and you want to support it, please go to [Patreon](https://www.patreon.com/causal_inference_for_the_brave_and_true). If you are not ready to contribute financially, you can also help by fixing typos, suggesting edits or giving feedback on passages you didn't understand. Just go to the book's repository and [open an issue](https://github.com/matheusfacure/python-causality-handbook/issues). 
Finally, if you liked this content, please share it with others who might find it useful and give it a [star on GitHub](https://github.com/matheusfacure/python-causality-handbook/stargazers).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
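  [
    "markdown"
  ],
  [
    "code"
  ],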
[
"markdown"
],
[
"code",
"code"
],
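  [
    "markdown"
  ],
  [
    "code"
  ],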
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
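  [
    "markdown"
  ],
  [
    "code"
  ],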
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e73552ebcdd02fd68154453b23cad1192ca9752e | 15,857 | ipynb | Jupyter Notebook | dd_1/Part 4/Section 06 - Single Inheritance/02 - The object Class.ipynb | rebekka-halal/bg | 616a40286fe1d34db2916762c477676ed8067cdb | [
"Apache-2.0"
] | null | null | null | dd_1/Part 4/Section 06 - Single Inheritance/02 - The object Class.ipynb | rebekka-halal/bg | 616a40286fe1d34db2916762c477676ed8067cdb | [
"Apache-2.0"
] | null | null | null | dd_1/Part 4/Section 06 - Single Inheritance/02 - The object Class.ipynb | rebekka-halal/bg | 616a40286fe1d34db2916762c477676ed8067cdb | [
"Apache-2.0"
] | null | null | null | 19.081829 | 250 | 0.484518 | [
[
[
"### The `object` Class",
"_____no_output_____"
],
[
"As we discussed earlier, `object` is a built-in Python **class**, and every class in Python inherits from that class.",
"_____no_output_____"
]
],
[
[
"type(object)",
"_____no_output_____"
]
],
[
[
"As you can see the type of `object` is `type` - this means it is a class, just like `int`, `str`, `dict` are also classes (types):",
"_____no_output_____"
]
],
[
[
"type(int), type(str), type(dict)",
"_____no_output_____"
]
],
[
[
"When we create a class that does not explicitly inherit from anything, we are implicitly inheriting from `object`:",
"_____no_output_____"
]
],
[
[
"class Person:\n pass",
"_____no_output_____"
],
[
"issubclass(Person, object)",
"_____no_output_____"
]
],
[
[
"And it's not just our custom classes that inherit from `object`, every type in Python does too:",
"_____no_output_____"
]
],
[
[
"issubclass(int, object)",
"_____no_output_____"
]
],
[
[
"Even modules, which are objects and instances of `module` are subclasses of `object`:",
"_____no_output_____"
]
],
[
[
"import math",
"_____no_output_____"
],
[
"type(math)",
"_____no_output_____"
]
],
[
[
"So the `math` module is an instance of the `module` type:",
"_____no_output_____"
]
],
[
[
"ty = type(math)",
"_____no_output_____"
],
[
"type(ty)",
"_____no_output_____"
],
[
"issubclass(ty, object)",
"_____no_output_____"
]
],
[
[
"If you're wondering where the `module` type (class) lives, you can get a reference to it the way I did here, or you can look for it in the `types` module where you can it and the other built-in types.",
"_____no_output_____"
]
],
[
[
"import types",
"_____no_output_____"
],
[
"dir(types)",
"_____no_output_____"
]
],
[
[
"For example, if we define a function:",
"_____no_output_____"
]
],
[
[
"def my_func():\n pass",
"_____no_output_____"
],
[
"type(my_func)",
"_____no_output_____"
],
[
"types.FunctionType is type(my_func)",
"_____no_output_____"
]
],
[
[
"And `FunctionType` inherits from `object`:",
"_____no_output_____"
]
],
[
[
"issubclass(types.FunctionType, object)",
"_____no_output_____"
]
],
[
[
"and of course, instances of that type are therefore also instances of `object`:",
"_____no_output_____"
]
],
[
[
"isinstance(my_func, object)",
"_____no_output_____"
]
],
[
[
"as well as being instances of `FunctionType`:",
"_____no_output_____"
]
],
[
[
"isinstance(my_func, types.FunctionType)",
"_____no_output_____"
]
],
[
[
"The `object` class implements a certain amount of base functionality.\n\nWe can see some of them here:",
"_____no_output_____"
]
],
[
[
"dir(object)",
"_____no_output_____"
]
],
[
[
"So as you can see `object` implements methods such as `__eq__`, `__hash__`, `__repr__` and `__str__`.",
"_____no_output_____"
],
[
"Let's investigate some of those, starting with `__repr__` and `__str__`:",
"_____no_output_____"
]
],
[
[
"o1 = object()",
"_____no_output_____"
],
[
"str(o1)",
"_____no_output_____"
],
[
"repr(o1)",
"_____no_output_____"
]
],
[
[
"You probably recognize that output! If we define our own class that does not **override** the `__repr__` or `__str__` methods, when we call those methods on instances of that class it will actually call the implementation in the `object` class:",
"_____no_output_____"
]
],
[
[
"class Person:\n pass",
"_____no_output_____"
],
[
"p = Person()\nstr(p)",
"_____no_output_____"
]
],
[
[
"So this actually called the `__str__` method in the `object` class (but it is an instance method, so it applies to our specific instance `p`).",
"_____no_output_____"
],
[
"Similarly, the `__eq__` method in the object class is implemented, and uses the object **id** to determine equality:",
"_____no_output_____"
]
],
[
[
"o1 = object()\no2 = object()",
"_____no_output_____"
],
[
"id(o1), id(o2)",
"_____no_output_____"
],
[
"o1 is o2, o1 == o2, o1 is o1, o1 == o1",
"_____no_output_____"
]
],
[
[
"So we can use the `==` operator with our custom classes even if we did not implement `__eq__` explicitly - because it inherits it from the `object` class. \n\nAnd so we have the same functionality - our custom objects will compare equal only if they are the same object (id):",
"_____no_output_____"
]
],
[
[
"p1 = Person()\np2 = Person()\n\np1 is p2, p1 == p2, p1 is p1, p1 == p1",
"_____no_output_____"
]
],
[
[
"We can actually see what specific method is being called by looking at the id of the method in our object, and in the object class:",
"_____no_output_____"
]
],
[
[
"id(Person.__eq__)",
"_____no_output_____"
],
[
"id(object.__eq__)",
"_____no_output_____"
]
],
[
[
"See? Same method!",
"_____no_output_____"
],
[
"In the same way, we can write classes that do not have `__init__` or `__new__` methods - because they just inherit it from `object`:",
"_____no_output_____"
]
],
[
[
"id(Person.__init__), id(object.__init__)",
"_____no_output_____"
]
],
[
[
"But of course, if we override those methods, then the `object` methods will not be used:",
"_____no_output_____"
]
],
[
[
"class Person:\n def __init__(self):\n pass",
"_____no_output_____"
],
[
"id(Person.__init__), id(object.__init__)",
"_____no_output_____"
]
],
[
[
"Different methods...",
"_____no_output_____"
],
[
"We'll look at overriding in more detail next.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e73555c9b9fa4fcad01d5f121d112d892fe1440e | 13,098 | ipynb | Jupyter Notebook | distance_spread.ipynb | FardinAhsan146/Spreadsheet-of-distances-google-maps | b7835d0ed7a50135f19f0fd891b974a9e1e070e4 | [
"MIT"
] | null | null | null | distance_spread.ipynb | FardinAhsan146/Spreadsheet-of-distances-google-maps | b7835d0ed7a50135f19f0fd891b974a9e1e070e4 | [
"MIT"
] | null | null | null | distance_spread.ipynb | FardinAhsan146/Spreadsheet-of-distances-google-maps | b7835d0ed7a50135f19f0fd891b974a9e1e070e4 | [
"MIT"
] | null | null | null | 29.566591 | 197 | 0.402352 | [
[
[
"# Route automation ",
"_____no_output_____"
],
[
"When I'm planning a trip, I usually like knowing the distance between all the places, I will be visiting in order to plan a route and just get a general idea of how much it will cost, etc.\n\nPicking out the distance between all the locations gets annoying fast, so I wrote this script.\n\nWorks with almost all places (even if you make spelling mistakes, as google is smart) as long as your spelling isn't TOO bad, it will work as google will probably display a result.",
"_____no_output_____"
]
],
[
[
"import requests,bs4,re,pandas as pd",
"_____no_output_____"
],
[
"def google_search(start,destination):\n \n \"\"\"\n This function sends a search request to google and takes extracts out the answer from the quick answer box,\n code is written such that it works for distances between locations with the format google uses,\n as of when the code was written.\n \n Will only work if google has auto complete result for the distance, thus cities and area will work,\n but not going too complex, given string formatting then is a bloody nightmare\n \n \"\"\"\n \n question = f\"distance from {start.lower()} to {destination.lower()} in km\" \n \n\n\n url = \"https://www.google.com/search?hl=en&q=\" + question\n\n\n request_result = requests.get( url )\n\n\n soup = bs4.BeautifulSoup( request_result.text \n , \"html.parser\" )\n\n\n\n temp = soup.find( \"div\" , class_='BNeawe' ).text \n\n \n \n find_distance = lambda temp_string: ''.join(re.findall(r'([,.\\d]+)\\s*(?:km)',temp_string)) \n \n \n def find_time(temp_string):\n \n split_string = temp_string.split('\\n')\n \n \n if len(split_string) < 3:\n return ''\n else:\n relevant_part = split_string[2]\n to_ret = re.findall(r'(.*?)\\(', relevant_part)\n return ''.join(to_ret)\n \n find_route = lambda temp_string: ''.join(re.findall('(?<=via ).*$', temp_string))\n \n \n distance = find_distance(temp) if find_distance(temp) != None else ''\n \n time = find_time(temp) if find_time(temp) != None else ''\n \n route = find_route(temp) if find_route(temp) != None else ''\n \n return (distance,time,route)",
"_____no_output_____"
],
[
"#call in a file with 2 columns of locations\n\ndf = pd.read_csv('initial_file.csv')",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"#Pre Process\ndf['From '] = df['From '].replace('NJ','new jersey')\n\n\ndf = df.dropna()\ndf = df.reset_index(drop=True)\ndf = df.astype(\"string\")",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 8 entries, 0 to 7\nData columns (total 2 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 From 8 non-null string\n 1 To 8 non-null string\ndtypes: string(2)\nmemory usage: 256.0 bytes\n"
],
[
"#Logic\n\ndf['distance_km'] = df.apply(lambda x: google_search(x['From '], x['To'])[0], axis=1)\ndf['distance_km'] = df['distance_km'].replace(',','', regex=True).astype(float)\ndf['distance_mi'] = df['distance_km']*0.6213712\n\ndf['time'] = df.apply(lambda x: google_search(x['From '], x['To'])[1], axis=1)\ndf['via'] = df.apply(lambda x: google_search(x['From '], x['To'])[2], axis=1)\n\ndf=df.reindex(columns= ['From ', 'To', 'distance_km', 'distance_mi', 'time', 'via'])",
"_____no_output_____"
],
[
"#Final result\n\ndf",
"_____no_output_____"
],
[
"#To save work after running code\n\ndf.to_excel(\"final_distances.xlsx\", index = False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e73557ef62d43d299004952b72b3e8763e1ba8a6 | 134,500 | ipynb | Jupyter Notebook | doc/A_step-by-step_basic_example_python.ipynb | tkoyama010/getfem_presentation | f9e8a9adec6b2289ff486bb9d846f7f8bab1a779 | [
"CC0-1.0"
] | 3 | 2019-04-16T21:15:02.000Z | 2019-05-21T11:31:35.000Z | doc/A_step-by-step_basic_example_python.ipynb | tkoyama010/getfem_presentation | f9e8a9adec6b2289ff486bb9d846f7f8bab1a779 | [
"CC0-1.0"
] | null | null | null | doc/A_step-by-step_basic_example_python.ipynb | tkoyama010/getfem_presentation | f9e8a9adec6b2289ff486bb9d846f7f8bab1a779 | [
"CC0-1.0"
] | null | null | null | 327.250608 | 65,638 | 0.931428 | [
[
[
"from getfem import Mesh, MeshFem, Fem, MeshIm, Integ, Model\nfrom numpy import arange",
"_____no_output_____"
],
[
"m = Mesh('cartesian', arange(0,1.1,0.1), arange(0,1.1,0.1))",
"_____no_output_____"
],
[
"mf = MeshFem(m, 1)",
"_____no_output_____"
],
[
"mf.set_fem(Fem('FEM_QK(2,2)'))",
"_____no_output_____"
],
[
"print Fem('FEM_QK(2,2)').poly_str()",
"('1 - 3*x - 3*y + 2*x^2 + 9*x*y + 2*y^2 - 6*x^2*y - 6*x*y^2 + 4*x^2*y^2', '4*x - 4*x^2 - 12*x*y + 12*x^2*y + 8*x*y^2 - 8*x^2*y^2', '-x + 2*x^2 + 3*x*y - 6*x^2*y - 2*x*y^2 + 4*x^2*y^2', '4*y - 12*x*y - 4*y^2 + 8*x^2*y + 12*x*y^2 - 8*x^2*y^2', '16*x*y - 16*x^2*y - 16*x*y^2 + 16*x^2*y^2', '-4*x*y + 8*x^2*y + 4*x*y^2 - 8*x^2*y^2', '-y + 3*x*y + 2*y^2 - 2*x^2*y - 6*x*y^2 + 4*x^2*y^2', '-4*x*y + 4*x^2*y + 8*x*y^2 - 8*x^2*y^2', 'x*y - 2*x^2*y - 2*x*y^2 + 4*x^2*y^2')\n"
],
[
"mim = MeshIm(m, Integ('IM_EXACT_PARALLELEPIPED(2)'))",
"_____no_output_____"
],
[
"border = m.outer_faces()",
"_____no_output_____"
],
[
"m.set_region(42, border)",
"_____no_output_____"
],
[
"md = Model('real')",
"_____no_output_____"
],
[
"md.add_fem_variable('u', mf)",
"_____no_output_____"
],
[
"md.add_Laplacian_brick(mim, 'u');",
"_____no_output_____"
],
[
"g = mf.eval('x*(x-1) - y*(y-1)')\nmd.add_initialized_fem_data('DirichletData', mf, g)\nmd.add_Dirichlet_condition_with_multipliers(mim, 'u', mf, 42, 'DirichletData')",
"_____no_output_____"
],
[
"md.solve()",
"_____no_output_____"
],
[
"u = md.variable('u')",
"_____no_output_____"
],
[
"mf.export_to_pos('u.pos',u,'Computed solution')",
"_____no_output_____"
],
[
"%%writefile gscript\nPrint \"A_step-by-step_basic_example_python_image1.png\";\nExit;",
"Writing gscript\n"
],
[
"!cat gscript",
"Print \"A_step-by-step_basic_example_python_image1.png\";\r\nExit;"
],
[
"!gmsh u.pos gscript",
"_____no_output_____"
],
[
"from IPython.core.display import Image",
"_____no_output_____"
],
[
"Image('A_step-by-step_basic_example_python_image1.png')",
"_____no_output_____"
],
[
"f = mf.eval('5')\nmd.add_initialized_fem_data('VolumicData', mf, f)\nmd.add_source_term_brick(mim, 'u', 'VolumicData')",
"_____no_output_____"
],
[
"md.solve()\nu = md.variable('u')\nmf.export_to_pos('u.pos',u,'Computed solution')",
"_____no_output_____"
],
[
"%%writefile gscript\nPrint \"A_step-by-step_basic_example_python_image2.png\";\nExit;",
"Overwriting gscript\n"
],
[
"!gmsh u.pos gscript",
"_____no_output_____"
],
[
"Image('A_step-by-step_basic_example_python_image2.png')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e73567aae4475dc31e2e698a1d89a3e5083e23be | 12,552 | ipynb | Jupyter Notebook | Ch5/04_NER_using_spaCy - CoNLL.ipynb | quicksilverTrx/practical-nlp | bb372ec32885a7b25da7857434e1c9f7a100a94e | [
"MIT"
] | 5 | 2020-12-02T23:17:57.000Z | 2021-04-14T01:17:09.000Z | Ch5/04_NER_using_spaCy - CoNLL.ipynb | quicksilverTrx/practical-nlp | bb372ec32885a7b25da7857434e1c9f7a100a94e | [
"MIT"
] | 5 | 2021-08-23T20:56:47.000Z | 2022-02-10T04:38:21.000Z | Ch5/04_NER_using_spaCy - CoNLL.ipynb | quicksilverTrx/practical-nlp | bb372ec32885a7b25da7857434e1c9f7a100a94e | [
"MIT"
] | 3 | 2020-12-02T23:42:01.000Z | 2021-03-03T23:04:00.000Z | 37.246291 | 173 | 0.453474 | [
[
[
"# Training and Evaluating an NER model with spaCy on the CoNLL dataset\n\nIn this notebook, we will take a look at using spaCy commandline to train and evaluate a NER model. We will also compare it with the pretrained NER model in spacy. \n\nNote: we will create multiple folders during this experiment:\nspacyNER_data ",
"_____no_output_____"
],
[
"## Step 1: Converting data to json structures so it can be used by Spacy",
"_____no_output_____"
]
],
[
[
"#Read the CONLL data from conll2003 folder, and store the formatted data into a folder spacyNER_data\n!mkdir spacyNER_data\n#the above two lines create folders if they don't exist. If they do, the output shows a message that it\n#already exists and cannot be created again\n!python3 -m spacy convert \"Data/conll2003/en/train.txt\" spacyNER_data -c ner\n!python3 -m spacy convert \"Data/conll2003/en/test.txt\" spacyNER_data -c ner\n!python3 -m spacy convert \"Data/conll2003/en/valid.txt\" spacyNER_data -c ner",
"mkdir: cannot create directory ‘spacyNER_data’: File exists\n\u001b[38;5;2m✔ Generated output file (1 documents)\u001b[0m\nspacyNER_data/train.json\n\u001b[38;5;2m✔ Generated output file (1 documents)\u001b[0m\nspacyNER_data/test.json\n\u001b[38;5;2m✔ Generated output file (1 documents)\u001b[0m\nspacyNER_data/valid.json\n"
]
],
[
[
"#### For example, the data before and after running spacy's convert program looks as follows.",
"_____no_output_____"
]
],
[
[
"!echo \"BEFORE : (Data/conll2003/en/train.txt)\"\n!head \"Data/conll2003/en/train.txt\" -n 11 | tail -n 9\n!echo \"\\nAFTER : (Data/conll2003/en/train.json)\"\n!head \"spacyNER_data/train.json\" -n 64 | tail -n 49",
"BEFORE : (Data/conll2003/en/train.txt)\nEU NNP B-NP B-ORG\nrejects VBZ B-VP O\nGerman JJ B-NP B-MISC\ncall NN I-NP O\nto TO B-VP O\nboycott VB I-VP O\nBritish JJ B-NP B-MISC\nlamb NN I-NP O\n. . O O\n\nAFTER : (Data/conll2003/en/train.json)\n {\n \"tokens\":[\n {\n \"orth\":\"EU\",\n \"tag\":\"NNP\",\n \"ner\":\"U-ORG\"\n },\n {\n \"orth\":\"rejects\",\n \"tag\":\"VBZ\",\n \"ner\":\"O\"\n },\n {\n \"orth\":\"German\",\n \"tag\":\"JJ\",\n \"ner\":\"U-MISC\"\n },\n {\n \"orth\":\"call\",\n \"tag\":\"NN\",\n \"ner\":\"O\"\n },\n {\n \"orth\":\"to\",\n \"tag\":\"TO\",\n \"ner\":\"O\"\n },\n {\n \"orth\":\"boycott\",\n \"tag\":\"VB\",\n \"ner\":\"O\"\n },\n {\n \"orth\":\"British\",\n \"tag\":\"JJ\",\n \"ner\":\"U-MISC\"\n },\n {\n \"orth\":\"lamb\",\n \"tag\":\"NN\",\n \"ner\":\"O\"\n },\n {\n \"orth\":\".\",\n \"tag\":\".\",\n \"ner\":\"O\"\n }\n ]\n },\n"
]
],
[
[
"## Training the NER model with Spacy (CLI)\n\nAll the commandline options can be seen at: https://spacy.io/api/cli#train\nWe are training using the train program in spacy, for English (en), and the results are stored in a folder \ncalled \"model\" (created while training). Our training file is in \"spacyNER_data/train.json\" and the validation file is at: \"spacyNER_data/valid.json\". \n\n-G stands for gpu option.\n-p stands for pipeline, and it should be followed by a comma separated set of options - in this case, a tagger and an NER are being trained simultaneously",
"_____no_output_____"
]
],
[
[
"!python3 -m spacy train en model spacyNER_data/train.json spacyNER_data/valid.json -G -p tagger,ner",
"Training pipeline: ['tagger', 'ner']\nStarting with blank model 'en'\nCounting training words (limit=0)\n\nItn Dep Loss NER Loss UAS NER P NER R NER F Tag % Token % CPU WPS GPU WPS\n--- ---------- ---------- ------- ------- ------- ------- ------- ------- ------- -------\n 0 0.000 20994.512 0.000 78.404 77.230 77.813 94.075 100.000 15468 0\n 1 0.000 10338.546 0.000 84.808 84.366 84.586 94.812 100.000 15833 0\n 2 0.000 7414.531 0.000 86.235 85.931 86.083 95.015 100.000 15839 0\n 3 0.000 5461.594 0.000 87.020 86.873 86.946 95.106 100.000 15737 0\n 4 0.000 4101.375 0.000 87.669 87.344 87.506 95.182 100.000 15887 0\n 5 0.000 3413.915 0.000 87.622 87.327 87.475 95.258 100.000 15919 0\n 6 0.000 3008.749 0.000 88.024 87.580 87.802 95.322 100.000 18794 0\n 7 0.000 2704.280 0.000 88.323 87.832 88.077 95.347 100.000 15652 0\n 8 0.000 2301.952 0.000 88.195 87.883 88.038 95.405 100.000 15935 0\n 9 0.000 2162.503 0.000 88.227 88.034 88.131 95.428 100.000 15866 0\n 10 0.000 1954.655 0.000 88.394 88.186 88.290 95.409 100.000 15689 0\n 11 0.000 1846.583 0.000 88.233 88.085 88.159 95.391 100.000 15812 0\n 12 0.000 1760.181 0.000 88.682 88.354 88.518 95.452 100.000 15829 0\n 13 0.000 1670.751 0.000 88.579 88.236 88.407 95.465 100.000 15689 0\n 14 0.000 1534.231 0.000 88.443 88.219 88.331 95.481 100.000 15662 0\n 15 0.000 1439.400 0.000 88.782 88.438 88.610 95.510 100.000 15864 0\n 16 0.000 1407.665 0.000 88.915 88.556 88.735 95.477 100.000 15872 0\n 17 0.000 1199.285 0.000 88.709 88.455 88.582 95.512 100.000 15826 0\n 18 0.000 1302.530 0.000 88.709 88.455 88.582 95.512 100.000 15776 0\n 19 0.000 1147.754 0.000 88.874 88.455 88.664 95.519 100.000 19138 0\n 20 0.000 1115.887 0.000 88.987 88.388 88.686 95.519 100.000 19035 0\n 21 0.000 1146.815 0.000 89.006 88.421 88.713 95.531 100.000 15839 0\n 22 0.000 1143.363 0.000 89.122 88.522 88.821 95.529 100.000 15981 0\n 23 0.000 1051.906 0.000 89.171 88.556 88.863 95.550 100.000 15931 0\n 24 0.000 922.404 0.000 89.124 88.674 88.898 95.550 100.000 16115 0\n 25 0.000 1033.210 0.000 89.013 88.758 88.885 95.527 100.000 15973 0\n 26 0.000 939.757 0.000 88.962 88.708 88.835 95.539 100.000 15939 0\n 27 0.000 874.334 0.000 88.808 88.539 88.674 95.521 100.000 15963 0\n 28 0.000 847.320 0.000 88.870 88.691 88.780 95.541 100.000 15855 0\n 29 0.000 879.595 0.000 88.763 88.674 88.719 95.564 100.000 15893 0\n\u001b[38;5;2m✔ Saved model to output directory\u001b[0m\nmodel/model-final\n\u001b[2K\u001b[38;5;2m✔ Created best model\u001b[0m\nmodel/model-best\n"
]
],
[
[
"Notice how the performance improves with each iteration!\n## Evaluating the model with test data set (`spacyNER_data/test.json`)",
"_____no_output_____"
],
[
"### On Trained model (`model/model-best`)",
"_____no_output_____"
]
],
[
[
"#create a folder to store the output and visualizations. \n!mkdir result\n!python3 -m spacy evaluate model/model-best spacyNER_data/test.json -dp result\n# !python -m spacy evaluate model/model-final data/test.txt.json -dp result",
"\u001b[1m\n================================== Results ==================================\u001b[0m\n\nTime 3.53 s\nWords 46666 \nWords/s 13234 \nTOK 100.00\nPOS 94.79 \nUAS 0.00 \nLAS 0.00 \nNER P 78.09 \nNER R 78.75 \nNER F 78.42 \n\n\u001b[38;5;2m✔ Generated 25 parses as HTML\u001b[0m\nresult\n"
]
],
[
[
"a Visualization of the entity tagged test data can be seen in result/entities.html folder. ",
"_____no_output_____"
],
[
"### On spacy's Pretrained NER model (`en`)",
"_____no_output_____"
]
],
[
[
"!mkdir pretrained_result\n!python3 -m spacy evaluate en spacyNER_data/test.json -dp pretrained_result",
"\u001b[1m\n================================== Results ==================================\u001b[0m\n\nTime 6.52 s\nWords 46666 \nWords/s 7160 \nTOK 100.00\nPOS 86.84 \nUAS 0.00 \nLAS 0.00 \nNER P 7.97 \nNER R 10.68 \nNER F 9.12 \n\n\u001b[38;5;2m✔ Generated 25 parses as HTML\u001b[0m\npretrained_result\n"
]
],
[
[
"a Visualization of the entity tagged test data can be seen in pretrained_result/entities.html folder. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7356dd7f3fa0edc91e13a2586841ea14eef4c4a | 4,553 | ipynb | Jupyter Notebook | 2020/01/code.ipynb | UncleCJ/advent-of-code | 38e115b1dc5111c684657a33fa4d0d4678f3aae2 | [
"CC0-1.0"
] | 6 | 2020-12-07T16:47:13.000Z | 2021-12-06T16:59:56.000Z | 2020/01/code.ipynb | UncleCJ/advent-of-code | 38e115b1dc5111c684657a33fa4d0d4678f3aae2 | [
"CC0-1.0"
] | null | null | null | 2020/01/code.ipynb | UncleCJ/advent-of-code | 38e115b1dc5111c684657a33fa4d0d4678f3aae2 | [
"CC0-1.0"
] | 1 | 2021-12-02T01:31:36.000Z | 2021-12-02T01:31:36.000Z | 4,553 | 4,553 | 0.700637 | [
[
[
"# Day 1: Report Repair\n\n[*Advent of Code day 1 - 2020-12-01*](https://adventofcode.com/2020/day/1) and [*solution megathread*](https://www.reddit.com/r/adventofcode/comments/k4e4lm/2020_day_1_solutions/)\n\n[](https://mybinder.org/v2/gh/UncleCJ/advent-of-code/master?filepath=day-01%2Fday-01.ipynb)\n\nAfter saving Christmas [five years in a row](https://adventofcode.com/events), you've decided to take a vacation at a nice resort on a tropical island. Surely, Christmas will go on without you.\n\nThe tropical island has its own currency and is entirely cash-only. The gold coins used there have a little picture of a starfish; the locals just call them *stars*. None of the currency exchanges seem to have heard of them, but somehow, you'll need to find fifty of these coins by the time you arrive so you can pay the deposit on your room.\n\nTo save your vacation, you need to get all *fifty stars* by December 25th.\n\nCollect stars by solving puzzles. Two puzzles will be made available on each day in the Advent calendar; the second puzzle is unlocked when you complete the first. Each puzzle grants *one star*. Good luck!",
"_____no_output_____"
],
[
"## Part 1\n\nBefore you leave, the Elves in accounting just need you to fix your *expense report* (your puzzle input); apparently, something isn't quite adding up.\n\nSpecifically, they need you to *find the two entries that sum to 2020* and then multiply those two numbers together.\n\nFor example, suppose your expense report contained the following:\n\n```\n1721\n979\n366\n299\n675\n1456\n```\n\nIn this list, the two entries that sum to `2020` are `1721` and `299`. Multiplying them together produces `1721 * 299 = 514579`, so the correct answer is *`514579`*.\n\nOf course, your expense report is much larger. *Find the two entries that sum to `2020`; what do you get if you multiply them together?*\n\nTo begin, [get your puzzle input](https://adventofcode.com/2020/day/1/input)",
"_____no_output_____"
]
],
[
[
"# Initialize - from https://www.techcoil.com/blog/how-to-download-a-file-via-http-post-and-http-get-with-python-3-requests-library/\n# I'm fairly sure there is some error here, but leaving it until I need to or can fix it\nimport os\nimport requests\n\nif os.path.isfile('./input.txt'):\n print(\"-- Already have input, skipping download...\")\nelse:\n print(\"-- No input.txt, attempting to download\")\n response = requests.get('https://adventofcode.com/2020/day/1/input')\n if response.status_code == 200:\n with open('input.txt', 'a') as local_file:\n print(\"writing input.txt\")\n local_file.write(response.content)\n\nwith open('input.txt', 'r') as inp:\n inputdata = [line.strip() for line in inp.readlines()]",
"-- Already have input, skipping download...\n"
],
[
"answer = 0\n\n# write your solution here - 'inputdata' is a list of the lines",
"_____no_output_____"
]
],
[
[
"## Part 2\n\nThe Elves in accounting are thankful for your help; one of them even offers you a starfish coin they had left over from a past vacation. They offer you a second one if you can find *three* numbers in your expense report that meet the same criteria.\n\nUsing the above example again, the three entries that sum to `2020` are `979`, `366`, and `675`. Multiplying them together produces the answer, *`241861950`*.\n\nIn your expense report, *what is the product of the three entries that sum to `2020`?*",
"_____no_output_____"
]
],
[
[
"answer = 0\n\n# write your solution here - 'inputdata' is a list of the lines",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7357283fa362e063a2d87c88a0e0f1727332678 | 172,668 | ipynb | Jupyter Notebook | Decission Tree.ipynb | Yogi7789/Machine-Learning-Assignment-Pratical | 0313ef3b02b68fddf216c206f87ff4991ea1fab3 | [
"Apache-2.0"
] | null | null | null | Decission Tree.ipynb | Yogi7789/Machine-Learning-Assignment-Pratical | 0313ef3b02b68fddf216c206f87ff4991ea1fab3 | [
"Apache-2.0"
] | null | null | null | Decission Tree.ipynb | Yogi7789/Machine-Learning-Assignment-Pratical | 0313ef3b02b68fddf216c206f87ff4991ea1fab3 | [
"Apache-2.0"
] | null | null | null | 89.418954 | 118,448 | 0.767432 | [
[
[
"import warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pickle\nfrom sklearn.tree import DecisionTreeClassifier, export_graphviz\nfrom sklearn.model_selection import train_test_split,GridSearchCV\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score, roc_auc_score\nfrom six import StringIO ",
"_____no_output_____"
],
[
"df = pd.read_csv(\"https://raw.githubusercontent.com/BigDataGal/Python-for-Data-Science/master/titanic-train.csv\")",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.isnull().sum()",
"_____no_output_____"
],
[
"data = df[[\"Survived\",\"Pclass\",\"Sex\",\"Age\",\"SibSp\",\"Parch\",\"Fare\"]]",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"data.isnull().sum()",
"_____no_output_____"
],
[
"data[\"Sex\"].replace({\"male\":0,\"female\":1},inplace = True)",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"data.describe()",
"_____no_output_____"
],
[
"data.dtypes",
"_____no_output_____"
],
[
"data.loc[data['Age'].isnull(),'Age']=np.mean(data['Age'])",
"_____no_output_____"
],
[
"data.describe()",
"_____no_output_____"
],
[
"data.isna().sum()",
"_____no_output_____"
],
[
"data.dtypes",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
],
[
"import seaborn as sns\nplt.figure(figsize=(20,25), facecolor='white')\nplotnumber = 1\n\nfor column in data:\n if plotnumber<=15 :\n ax = plt.subplot(4,4,plotnumber)\n sns.distplot(data[column])\n plt.xlabel(column,fontsize=20)\n plotnumber+=1\nplt.tight_layout()",
"_____no_output_____"
],
[
"X = data.drop(columns=\"Survived\")\nY = data[\"Survived\"]",
"_____no_output_____"
],
[
"X",
"_____no_output_____"
],
[
"Y",
"_____no_output_____"
],
[
"x_train,x_test,y_train,y_test = train_test_split(X,Y,test_size = 0.30, random_state = 355)",
"_____no_output_____"
],
[
"from sklearn.tree import DecisionTreeClassifier",
"_____no_output_____"
],
[
"clf = DecisionTreeClassifier()",
"_____no_output_____"
],
[
"clf.fit(x_train,y_train)",
"_____no_output_____"
],
[
"feature_name = list(X.columns)\nclass_name = list(Y.unique())\nfeature_name",
"_____no_output_____"
],
[
"from IPython.display import Image ",
"_____no_output_____"
]
],
[
[
"## create dot_file which store the tree structure",
"_____no_output_____"
]
],
[
[
"clf.score(x_train,y_train)",
"_____no_output_____"
],
[
"py_prediction = clf.predict(x_test)",
"_____no_output_____"
],
[
"py_prediction",
"_____no_output_____"
],
[
"clf.score(x_test,y_test)",
"_____no_output_____"
],
[
"from sklearn.preprocessing import StandardScaler",
"_____no_output_____"
],
[
"scalar = StandardScaler()\nx_transfrom = scalar.fit_transform(X)",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
],
[
"grid_param = {\n 'criterion': ['gini', 'entropy'],\n 'max_depth' : range(2,100,1),\n 'min_samples_leaf' : range(1,10,1),\n 'min_samples_split': range(2,10,1),\n 'splitter' : ['best', 'random']\n \n}",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split,GridSearchCV",
"_____no_output_____"
],
[
"grid_search = GridSearchCV(estimator=clf,param_grid=grid_param,cv=5,n_jobs =-1)",
"_____no_output_____"
],
[
"grid_search.fit(x_train,y_train)",
"_____no_output_____"
],
[
"grid_search.best_estimator_",
"_____no_output_____"
],
[
"grid_search.best_score_",
"_____no_output_____"
],
[
"pred = grid_search.predict(x_test)",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score, confusion_matrix, roc_curve, roc_auc_score",
"_____no_output_____"
],
[
"accuracy_score(y_test,pred)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7357407bde39959680d905b9220ab840298abd3 | 8,824 | ipynb | Jupyter Notebook | notebooks/ProcessLocalization.ipynb | XavierCHN/dota-tutorial | eefc8356b84a993149a26201834e75e51af98d44 | [
"MIT"
] | null | null | null | notebooks/ProcessLocalization.ipynb | XavierCHN/dota-tutorial | eefc8356b84a993149a26201834e75e51af98d44 | [
"MIT"
] | null | null | null | notebooks/ProcessLocalization.ipynb | XavierCHN/dota-tutorial | eefc8356b84a993149a26201834e75e51af98d44 | [
"MIT"
] | null | null | null | 38.199134 | 115 | 0.608681 | [
[
[
"import pandas as pd\nfrom pathlib import Path",
"_____no_output_____"
],
[
"sub_folder = \"7\"\n\nresults_path = Path(\"results_translations\") / sub_folder # Directory where we will store all the results\nresults_path.mkdir(exist_ok=True, parents=True)\n\ntranslations_base_path = Path(\"translations\") / sub_folder # Path to directory with the translations\ntranslation_codes = {\n \"de\": \"german\",\n \"el\": \"greek\",\n \"es-ES\": \"spanish\",\n \"fi\": \"finnish\",\n \"pt-BR\": \"brazilian\",\n \"ru\": \"russian\",\n \"tr\": \"turkish\",\n \"zh-CN\": \"schinese\",\n \"fr\": \"french\",\n \"pl\": \"polish\",\n \"it\": \"italian\",\n}",
"_____no_output_____"
],
[
"for code, lang_name in translation_codes.items():\n df = pd.read_csv(translations_base_path / code / \"addon_english.csv\", encoding=\"utf-8\")\n\n with open(results_path / f\"addon_{lang_name}.txt\", \"w\", encoding=\"utf-8\") as out_file:\n out_file.write('\"lang\"\\n')\n out_file.write('{\\n')\n out_file.write(f'\"Language\" \"{lang_name}\"\\n')\n out_file.write('\"Tokens\"\\n')\n out_file.write('{\\n')\n for row in df[[\"String ID\", \"Source text\", \"Translation\"]].to_numpy():\n original = row[1].replace('\"', \"'\")\n translation = row[2].replace('\"', \"'\")\n out_file.write(f'\"{row[0]}\" \"{translation}\"\\n')\n out_file.write(f'\"[english]{row[0]}\" \"{original}\"\\n')\n if '\"' in row[1] or '\"' in row[2]:\n print(\"Quote in row\", code, row[0])\n out_file.write('}\\n')\n out_file.write('}\\n')",
"Quote in row de Error_Tower_1\nQuote in row de Goal_1_Leveling_1\nQuote in row de Goal_1_Leveling_2\nQuote in row de Goal_1_Leveling_3\nQuote in row de Goal_1_BreatheFire_1\nQuote in row de Script_1_Opening_5\nQuote in row de Script_1_Movement_11\nQuote in row de Script_1_Leveling_1\nQuote in row de Script_1_Leveling_5\nQuote in row de Script_1_Leveling_9\nQuote in row de Script_1_Leveling_10\nQuote in row de Script_2_Creeps_11\nQuote in row de Script_2_Courier_9\nQuote in row de Script_3_Neutrals_6\nQuote in row de Script_4_Communication_5\nQuote in row de Script_4_Communication_6\nQuote in row de Script_6_DFZ\nQuote in row de Script_6_Yodi\nQuote in row de Script_6_Flam3s\nQuote in row de Script_6_valkyrjaRuby\nQuote in row de MessageToTheNoobs_31\nQuote in row de MessageToTheNoobs_77\nQuote in row el Goal_4_Communication_1\nQuote in row el Goal_4_Communication_2\nQuote in row el Script_1_Movement_11\nQuote in row el Script_3_Opening_12\nQuote in row el Script_3_Neutrals_3\nQuote in row el Script_3_Neutrals_8\nQuote in row el Script_4_Communication_3\nQuote in row el Script_4_Communication_5\nQuote in row el Script_4_Communication_6\nQuote in row el Script_5_5v5_11\nQuote in row el MessageToTheNoobs_19\nQuote in row el MessageToTheNoobs_77\nQuote in row es-ES Script_3_Neutrals_8\nQuote in row es-ES Script_4_Communication_5\nQuote in row es-ES Script_4_Communication_6\nQuote in row es-ES Script_4_Communication_13\nQuote in row es-ES Script_5_5v5_11\nQuote in row es-ES MessageToTheNoobs_77\nQuote in row pt-BR Script_2_Courier_9\nQuote in row pt-BR Script_3_Opening_17\nQuote in row pt-BR Script_3_Neutrals_8\nQuote in row pt-BR Script_4_Communication_6\nQuote in row pt-BR Script_4_Communication_13\nQuote in row pt-BR Script_6_Yodi\nQuote in row pt-BR Script_6_SUNSfan\nQuote in row tr Script_4_Communication_6\nQuote in row zh-CN Script_1_Movement_1\nQuote in row zh-CN Script_1_Leveling_10\nQuote in row fr Error_Courier_1\nQuote in row fr Error_Tower_1\nQuote in row fr Error_Chapter3_2\nQuote in row fr Error_Teamfight_1\nQuote in row fr Goal_1_Leveling_1\nQuote in row fr Goal_1_Leveling_2\nQuote in row fr Goal_1_Leveling_3\nQuote in row fr Goal_1_BreatheFire_1\nQuote in row fr Goal_2_Creeps_2\nQuote in row fr Goal_2_Tower_4\nQuote in row fr Goal_2_Courier_3\nQuote in row fr Goal_2_Courier_4\nQuote in row fr Goal_3_4\nQuote in row fr Goal_3_11\nQuote in row fr Goal_3_15\nQuote in row fr Goal_4_Opening_3\nQuote in row fr Goal_4_Wards_2\nQuote in row fr Goal_4_Wards_3\nQuote in row fr Goal_4_Wards_5\nQuote in row fr Goal_4_Outpost_1\nQuote in row fr Goal_4_Communication_1\nQuote in row fr Goal_5_5v5_3\nQuote in row fr Goal_5_5v5_4\nQuote in row fr Script_1_Movement_11\nQuote in row fr Script_1_Leveling_1\nQuote in row fr Script_1_Leveling_2\nQuote in row fr Script_1_Leveling_5\nQuote in row fr Script_1_Leveling_9\nQuote in row fr Script_1_Leveling_10\nQuote in row fr Script_1_BreatheFire_2\nQuote in row fr Script_1_Shop_7\nQuote in row fr Script_2_Creeps_2\nQuote in row fr Script_2_Creeps_11\nQuote in row fr Script_2_Creeps_20\nQuote in row fr Script_2_Courier_4\nQuote in row fr Script_2_Courier_5\nQuote in row fr Script_2_Courier_9\nQuote in row fr Script_3_Opening_3\nQuote in row fr Script_3_Opening_12\nQuote in row fr Script_3_Opening_17\nQuote in row fr Script_3_Neutrals_8\nQuote in row fr Script_4_Wards_12\nQuote in row fr Script_4_Wards_15\nQuote in row fr Script_4_Outpost_1\nQuote in row fr Script_4_Communication_3\nQuote in row fr Script_4_Communication_5\nQuote in row fr 
Script_4_Communication_6\nQuote in row fr Script_4_Communication_13\nQuote in row fr Script_5_5v5_6\nQuote in row fr Script_5_5v5_7\nQuote in row fr Script_5_5v5_11\nQuote in row fr Script_6_Closing_6\nQuote in row fr Script_6_Purge\nQuote in row fr Script_6_Yodi\nQuote in row fr Script_6_SUNSfan\nQuote in row fr Script_6_Alex\nQuote in row fr Script_6_Flam3s\nQuote in row fr Script_6_SinZ\nQuote in row fr Script_6_SmashTheState\nQuote in row fr MessageToTheNoobs_0\nQuote in row fr MessageToTheNoobs_15\nQuote in row fr MessageToTheNoobs_33\nQuote in row fr MessageToTheNoobs_62\nQuote in row fr MessageToTheNoobs_63\nQuote in row fr MessageToTheNoobs_77\nQuote in row pl Goal_4_Communication_2\nQuote in row pl Script_2_Creeps_20\nQuote in row pl Script_4_Wards_15\nQuote in row pl Script_4_Communication_6\nQuote in row it Goal_2_Tower_8\nQuote in row it Goal_2_Tower_10\nQuote in row it Script_1_Movement_11\nQuote in row it Script_4_Communication_13\nQuote in row it MessageToTheNoobs_65\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e73575122a8c2cac008c80f6dbd1ccf3ceec54c1 | 31,321 | ipynb | Jupyter Notebook | pytorchIntro/Tensores.ipynb | HectorFranc/deep-learning-with-Pytorch | 7989e44cebb85f720bee2e4499139e8c992a7cd9 | [
"MIT"
] | null | null | null | pytorchIntro/Tensores.ipynb | HectorFranc/deep-learning-with-Pytorch | 7989e44cebb85f720bee2e4499139e8c992a7cd9 | [
"MIT"
] | null | null | null | pytorchIntro/Tensores.ipynb | HectorFranc/deep-learning-with-Pytorch | 7989e44cebb85f720bee2e4499139e8c992a7cd9 | [
"MIT"
] | null | null | null | 28.551504 | 483 | 0.430989 | [
[
[
"import torch",
"_____no_output_____"
]
],
[
[
"# Tensores\n---",
"_____no_output_____"
]
],
[
[
"# Tensor con dimensiones dadas de números aleatorios\nA = torch.randn((8, 3, 5))",
"_____no_output_____"
],
[
"# Tamaño de un tensor\nA.size()",
"_____no_output_____"
],
[
"# Tensor.size() funciona como una tupla\nA.size() == (8, 3, 5)",
"_____no_output_____"
],
[
"# Los tensores soportan slicing\nA[0, :, 0]",
"_____no_output_____"
],
[
"# torch.zeros devuelve un tensor de la forma especificado con puros ceros\nC = torch.zeros((5, 5))\nC.dtype",
"_____no_output_____"
],
[
"# torch.randint recibe opcional un minimo, un maximo y un size para devolver un\n# tensor con valores enteros aleatorios en el rango\nD = torch.randint(2, (5, 5))\nD",
"_____no_output_____"
],
[
"C.dtype, D.dtype",
"_____no_output_____"
],
[
"# Tensor.float() convierte a floats los valores del tensor\nC + D == C + D.float()",
"_____no_output_____"
],
[
"# Configurar el tipo por defecto de los tensores creados a flotantes\ntorch.set_default_tensor_type('torch.FloatTensor')",
"_____no_output_____"
],
[
"# En qué dispositivo está el tensor: CPU/GPU\nC.device",
"_____no_output_____"
],
[
"# ¿Está disponible la GPU?\ntorch.cuda.is_available()",
"_____no_output_____"
],
[
"# Dispositivo a usar\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')",
"_____no_output_____"
],
[
"device",
"_____no_output_____"
],
[
"x = torch.randn((3, 5)) # CPU\n\n# torch.ones_like(Tensor) crea un tensor con puros unos de la misma forma \n# que el tensor pasado\ny = torch.ones_like(x, device=device) # GPU",
"_____no_output_____"
],
[
"# Error porque no están en el mismo dispositivo\nx + y",
"_____no_output_____"
],
[
"# Para poder hacer operaciones convertimos al mismo dispositivo con to()\nx = x.to(device)\nz = x + y\nprint(z)",
"tensor([[-0.1272, -0.1770, 0.4930, 0.4073, 1.2342],\n [ 0.1771, 1.5688, 1.9755, 1.1847, 0.1610],\n [ 0.8900, 0.9086, 1.7225, -0.7804, -0.1015]], device='cuda:0')\n"
],
[
"print(z.cpu())",
"tensor([[-0.1272, -0.1770, 0.4930, 0.4073, 1.2342],\n [ 0.1771, 1.5688, 1.9755, 1.1847, 0.1610],\n [ 0.8900, 0.9086, 1.7225, -0.7804, -0.1015]])\n"
],
[
"",
"_____no_output_____"
]
],
[
[
"# Datasets\n---",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/gdrive')",
"Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /gdrive\n"
],
[
"# torchvision tiene datasets relacionados con imagenes\nfrom torchvision import datasets",
"_____no_output_____"
],
[
"# Datasets en torchvision\ndir(datasets)",
"_____no_output_____"
],
[
"# Descarga el dataset CIFAR10 en la ruta especificada.\n# Lo descarga porque download=True\ncifar = datasets.CIFAR10('/gdrive/My Drive/dl-pytorch/datasets', download=True)",
"Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to /gdrive/My Drive/dl-pytorch/datasets/cifar-10-python.tar.gz\n"
],
[
"len(cifar) # Numero de ejemplos",
"_____no_output_____"
],
[
"# Datos del dataset CIFAR10 en un tensor\ndata = torch.Tensor(cifar.data)",
"_____no_output_____"
],
[
"# Proporciones del dataset (imagenes son tensores de cuatro dimensiones)\ndata.size()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e735888350a3fda825328245eedefe92fa981349 | 51,651 | ipynb | Jupyter Notebook | Kaggle_Challenge_Sprint_Study_Guide2.ipynb | JimKing100/DS-Unit-2-Kaggle-Challenge | d1a705987c5a4df8b3ab74daab453754b77045cc | [
"MIT"
] | null | null | null | Kaggle_Challenge_Sprint_Study_Guide2.ipynb | JimKing100/DS-Unit-2-Kaggle-Challenge | d1a705987c5a4df8b3ab74daab453754b77045cc | [
"MIT"
] | null | null | null | Kaggle_Challenge_Sprint_Study_Guide2.ipynb | JimKing100/DS-Unit-2-Kaggle-Challenge | d1a705987c5a4df8b3ab74daab453754b77045cc | [
"MIT"
] | null | null | null | 48.002788 | 309 | 0.437784 | [
[
[
"<a href=\"https://colab.research.google.com/github/JimKing100/DS-Unit-2-Kaggle-Challenge/blob/master/Kaggle_Challenge_Sprint_Study_Guide2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"import os, sys\nin_colab = 'google.colab' in sys.modules\n\n# Pull files from Github repo\nos.chdir('/content')\n!git init .\n!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge.git\n!git pull origin master\n \n# Install required python packages\n!pip install -r requirements.txt\n \n# Change into directory for module\nos.chdir('module3')",
"Reinitialized existing Git repository in /content/.git/\nfatal: remote origin already exists.\nFrom https://github.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge\n * branch master -> FETCH_HEAD\nAlready up to date.\nRequirement already satisfied: category_encoders==2.0.0 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 1)) (2.0.0)\nRequirement already satisfied: eli5==0.10.1 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 2)) (0.10.1)\nRequirement already satisfied: matplotlib!=3.1.1 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 3)) (3.0.3)\nRequirement already satisfied: pandas-profiling==2.3.0 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 4)) (2.3.0)\nRequirement already satisfied: pdpbox==0.2.0 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 5)) (0.2.0)\nRequirement already satisfied: plotly==4.1.1 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 6)) (4.1.1)\nRequirement already satisfied: seaborn==0.9.0 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 7)) (0.9.0)\nRequirement already satisfied: scikit-learn==0.21.3 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 8)) (0.21.3)\nRequirement already satisfied: shap==0.30.0 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 9)) (0.30.0)\nRequirement already satisfied: xgboost==0.90 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 10)) (0.90)\nRequirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.0.0->-r requirements.txt (line 1)) (1.16.5)\nRequirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.0.0->-r requirements.txt (line 1)) (1.3.1)\nRequirement already satisfied: patsy>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.0.0->-r requirements.txt (line 1)) (0.5.1)\nRequirement already satisfied: statsmodels>=0.6.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.0.0->-r requirements.txt (line 1)) (0.10.1)\nRequirement already satisfied: pandas>=0.21.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.0.0->-r requirements.txt (line 1)) (0.24.2)\nRequirement already satisfied: tabulate>=0.7.7 in /usr/local/lib/python3.6/dist-packages (from eli5==0.10.1->-r requirements.txt (line 2)) (0.8.3)\nRequirement already satisfied: jinja2 in /usr/local/lib/python3.6/dist-packages (from eli5==0.10.1->-r requirements.txt (line 2)) (2.10.1)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from eli5==0.10.1->-r requirements.txt (line 2)) (1.12.0)\nRequirement already satisfied: graphviz in /usr/local/lib/python3.6/dist-packages (from eli5==0.10.1->-r requirements.txt (line 2)) (0.10.1)\nRequirement already satisfied: attrs>16.0.0 in /usr/local/lib/python3.6/dist-packages (from eli5==0.10.1->-r requirements.txt (line 2)) (19.1.0)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.1.1->-r requirements.txt (line 3)) (1.1.0)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.1.1->-r requirements.txt (line 3)) (0.10.0)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.1.1->-r requirements.txt (line 3)) (2.5.3)\nRequirement 
already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.1.1->-r requirements.txt (line 3)) (2.4.2)\nRequirement already satisfied: htmlmin>=0.1.12 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.1.12)\nRequirement already satisfied: phik>=0.9.8 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.9.8)\nRequirement already satisfied: missingno>=0.4.2 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.4.2)\nRequirement already satisfied: confuse>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.3.0->-r requirements.txt (line 4)) (1.0.0)\nRequirement already satisfied: astropy in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.3.0->-r requirements.txt (line 4)) (3.0.5)\nRequirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from pdpbox==0.2.0->-r requirements.txt (line 5)) (0.13.2)\nRequirement already satisfied: psutil in /usr/local/lib/python3.6/dist-packages (from pdpbox==0.2.0->-r requirements.txt (line 5)) (5.4.8)\nRequirement already satisfied: retrying>=1.3.3 in /usr/local/lib/python3.6/dist-packages (from plotly==4.1.1->-r requirements.txt (line 6)) (1.3.3)\nRequirement already satisfied: scikit-image in /usr/local/lib/python3.6/dist-packages (from shap==0.30.0->-r requirements.txt (line 9)) (0.15.0)\nRequirement already satisfied: ipython in /usr/local/lib/python3.6/dist-packages (from shap==0.30.0->-r requirements.txt (line 9)) (5.5.0)\nRequirement already satisfied: tqdm>4.25.0 in /usr/local/lib/python3.6/dist-packages (from shap==0.30.0->-r requirements.txt (line 9)) (4.28.1)\nRequirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders==2.0.0->-r requirements.txt (line 1)) (2018.9)\nRequirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2->eli5==0.10.1->-r requirements.txt (line 2)) (1.1.1)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib!=3.1.1->-r requirements.txt (line 3)) (41.2.0)\nRequirement already satisfied: jupyter-client>=5.2.3 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (5.3.1)\nRequirement already satisfied: pytest>=4.0.2 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (5.1.3)\nRequirement already satisfied: nbconvert>=5.3.1 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (5.6.0)\nRequirement already satisfied: pytest-pylint>=0.13.0 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.14.1)\nRequirement already satisfied: numba>=0.38.1 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.40.1)\nRequirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from confuse>=1.0.0->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (3.13)\nRequirement already satisfied: PyWavelets>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->shap==0.30.0->-r requirements.txt (line 9)) (1.0.3)\nRequirement already satisfied: imageio>=2.0.1 
in /usr/local/lib/python3.6/dist-packages (from scikit-image->shap==0.30.0->-r requirements.txt (line 9)) (2.4.1)\nRequirement already satisfied: pillow>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->shap==0.30.0->-r requirements.txt (line 9)) (4.3.0)\nRequirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->shap==0.30.0->-r requirements.txt (line 9)) (2.3)\nRequirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.6/dist-packages (from ipython->shap==0.30.0->-r requirements.txt (line 9)) (4.3.2)\nRequirement already satisfied: pexpect; sys_platform != \"win32\" in /usr/local/lib/python3.6/dist-packages (from ipython->shap==0.30.0->-r requirements.txt (line 9)) (4.7.0)\nRequirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from ipython->shap==0.30.0->-r requirements.txt (line 9)) (2.1.3)\nRequirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.6/dist-packages (from ipython->shap==0.30.0->-r requirements.txt (line 9)) (1.0.16)\nRequirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.6/dist-packages (from ipython->shap==0.30.0->-r requirements.txt (line 9)) (0.8.1)\nRequirement already satisfied: decorator in /usr/local/lib/python3.6/dist-packages (from ipython->shap==0.30.0->-r requirements.txt (line 9)) (4.4.0)\nRequirement already satisfied: pickleshare in /usr/local/lib/python3.6/dist-packages (from ipython->shap==0.30.0->-r requirements.txt (line 9)) (0.7.5)\nRequirement already satisfied: pyzmq>=13 in /usr/local/lib/python3.6/dist-packages (from jupyter-client>=5.2.3->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (17.0.0)\nRequirement already satisfied: jupyter-core in /usr/local/lib/python3.6/dist-packages (from jupyter-client>=5.2.3->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (4.5.0)\nRequirement already satisfied: tornado>=4.1 in /usr/local/lib/python3.6/dist-packages (from jupyter-client>=5.2.3->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (4.5.3)\nRequirement already satisfied: importlib-metadata>=0.12; python_version < \"3.8\" in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.23)\nRequirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (1.8.0)\nRequirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.1.7)\nRequirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (7.2.0)\nRequirement already satisfied: pluggy<1.0,>=0.12 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.13.0)\nRequirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (19.1)\nRequirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (1.3.0)\nRequirement already satisfied: entrypoints>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from 
nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.3)\nRequirement already satisfied: testpath in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.4.2)\nRequirement already satisfied: defusedxml in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.6.0)\nRequirement already satisfied: nbformat>=4.4 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (4.4.0)\nRequirement already satisfied: mistune<2,>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.8.4)\nRequirement already satisfied: pandocfilters>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (1.4.2)\nRequirement already satisfied: bleach in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (3.1.0)\nRequirement already satisfied: pylint>=1.4.5 in /usr/local/lib/python3.6/dist-packages (from pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (2.4.1)\nRequirement already satisfied: llvmlite>=0.25.0dev0 in /usr/local/lib/python3.6/dist-packages (from numba>=0.38.1->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.29.0)\nRequirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from pillow>=4.3.0->scikit-image->shap==0.30.0->-r requirements.txt (line 9)) (0.46)\nRequirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.2->ipython->shap==0.30.0->-r requirements.txt (line 9)) (0.2.0)\nRequirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.6/dist-packages (from pexpect; sys_platform != \"win32\"->ipython->shap==0.30.0->-r requirements.txt (line 9)) (0.6.0)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata>=0.12; python_version < \"3.8\"->pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.6.0)\nRequirement already satisfied: jsonschema!=2.5.0,>=2.4 in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.4->nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (2.6.0)\nRequirement already satisfied: webencodings in /usr/local/lib/python3.6/dist-packages (from bleach->nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.5.1)\nRequirement already satisfied: astroid<3,>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (2.3.0)\nRequirement already satisfied: isort<5,>=4.2.5 in /usr/local/lib/python3.6/dist-packages (from pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (4.3.21)\nRequirement already satisfied: mccabe<0.7,>=0.6 in /usr/local/lib/python3.6/dist-packages (from pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (0.6.1)\nRequirement already satisfied: lazy-object-proxy in /usr/local/lib/python3.6/dist-packages (from 
astroid<3,>=2.3.0->pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (1.4.2)\nRequirement already satisfied: typed-ast<1.3.0; implementation_name == \"cpython\" and python_version < \"3.7\" in /usr/local/lib/python3.6/dist-packages (from astroid<3,>=2.3.0->pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (1.2.0)\nRequirement already satisfied: wrapt in /usr/local/lib/python3.6/dist-packages (from astroid<3,>=2.3.0->pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0->-r requirements.txt (line 4)) (1.11.2)\n"
]
],
[
[
"### Load and Split the Data - Train and Test",
"_____no_output_____"
]
],
[
[
"import category_encoders as ce\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import mean_absolute_error\n\nimport numpy as np\nimport pandas as pd\n\n# Read New York City apartment rental listing data\ndf = pd.read_csv('../data/renthop-nyc.csv')\nassert df.shape == (49352, 34)\n\n# Remove the most extreme 1% prices,\n# the most extreme .1% latitudes, &\n# the most extreme .1% longitudes\ndf = df[(df['price'] >= np.percentile(df['price'], 0.5)) & \n (df['price'] <= np.percentile(df['price'], 99.5)) & \n (df['latitude'] >= np.percentile(df['latitude'], 0.05)) & \n (df['latitude'] < np.percentile(df['latitude'], 99.95)) &\n (df['longitude'] >= np.percentile(df['longitude'], 0.05)) & \n (df['longitude'] <= np.percentile(df['longitude'], 99.95))]\n\n# Do train/test split\n# Use data from April & May 2016 to train\n# Use data from June 2016 to test\ndf['created'] = pd.to_datetime(df['created'], infer_datetime_format=True)\ncutoff = pd.to_datetime('2016-06-01')\ntrain = df[df.created < cutoff]\ntest = df[df.created >= cutoff]\n\ntrain, val = train_test_split(train, train_size=0.80, test_size=0.20, random_state=42)\nprint(train.shape, val.shape, test.shape)\n\ntrain.head()",
"(25475, 34) (6369, 34) (16973, 34)\n"
]
],
[
[
"### Baseline",
"_____no_output_____"
]
],
[
[
"print('Baseline - Mean of Price', train['price'].mean())",
"Baseline - Mean of Price 3580.408792934249\n"
]
],
[
[
"### Engineer Features",
"_____no_output_____"
]
],
[
[
"# Wrangle train & test sets in the same way\ndef engineer_features(df):\n \n # Avoid SettingWithCopyWarning\n df = df.copy()\n \n # Does the apartment have a description?\n df['description'] = df['description'].str.strip().fillna('')\n df['has_description'] = df['description'] != ''\n\n # How long is the description?\n df['description_length'] = df['description'].str.len()\n\n # How many total perks does each apartment have?\n perk_cols = ['elevator', 'cats_allowed', 'hardwood_floors', 'dogs_allowed',\n 'doorman', 'dishwasher', 'no_fee', 'laundry_in_building',\n 'fitness_center', 'pre-war', 'laundry_in_unit', 'roof_deck',\n 'outdoor_space', 'dining_room', 'high_speed_internet', 'balcony',\n 'swimming_pool', 'new_construction', 'exclusive', 'terrace', \n 'loft', 'garden_patio', 'common_outdoor_space', \n 'wheelchair_access']\n df['perk_count'] = df[perk_cols].sum(axis=1)\n\n # Are cats or dogs allowed?\n df['cats_or_dogs'] = (df['cats_allowed']==1) | (df['dogs_allowed']==1)\n\n # Are cats and dogs allowed?\n df['cats_and_dogs'] = (df['cats_allowed']==1) & (df['dogs_allowed']==1)\n\n # Total number of rooms (beds + baths)\n df['rooms'] = df['bedrooms'] + df['bathrooms']\n \n # Extract number of days elapsed in year, and drop original date feature\n df['days'] = (df['created'] - pd.to_datetime('2016-01-01')).dt.days\n \n df = df.drop(columns='created')\n df = df.drop(columns='description')\n\n return df\n \ntrain = engineer_features(train)\nval = engineer_features(val)\ntest = engineer_features(test)\n\nprint(train.shape)\ntrain.head()",
"(25475, 39)\n"
]
],
[
[
"### Train, Validate, Test - 80/20",
"_____no_output_____"
]
],
[
[
"#train, val = train_test_split(train, train_size=0.80, test_size=0.20, random_state=42)\nprint(train.shape, val.shape, test.shape)",
"(25475, 39) (6369, 39) (16973, 39)\n"
]
],
[
[
"### Cross-Validate",
"_____no_output_____"
]
],
[
[
"import category_encoders as ce\nimport numpy as np\nfrom sklearn.feature_selection import f_regression, SelectKBest\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.linear_model import Ridge\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.linear_model import LogisticRegression\n\ntarget = 'price'\nfeatures = train.columns.drop(target)\nX_train = train[features]\ny_train = train[target]\nX_val = val.drop(columns=target)\ny_val = val[target]\nX_test = test\n\npipeline = make_pipeline(\n #ce.TargetEncoder(min_samples_leaf=1, smoothing=1),\n ce.OneHotEncoder(use_cat_names=True),\n SimpleImputer(strategy='median'), \n RandomForestRegressor(n_estimators=10, n_jobs=-1, random_state=42)\n)\n\nk = 3\nscores = cross_val_score(pipeline, X_train, y_train, cv=k, \n scoring='neg_mean_absolute_error')\nprint(f'MAE for {k} folds:', -scores)\nprint('Mean of Scores: ', -scores.mean())\nprint('Standard Deviation of Scores: ', scores.std())\nprint('Absolute Scores:', abs(scores.std()/scores.mean()))",
"MAE for 3 folds: [412.16837219 408.79720129 411.15995068]\nMean of Scores: 410.70850805232976\nStandard Deviation of Scores: 1.4128101196260034\nAbsolute Scores: 0.0034399338994116784\n"
]
],
[
[
"### Use Pipeline to Encode Categoricals and Fit a Random Forest",
"_____no_output_____"
]
],
[
[
"pipeline = make_pipeline(\n #ce.TargetEncoder(min_samples_leaf=1, smoothing=1),\n ce.OneHotEncoder(use_cat_names=True),\n SimpleImputer(strategy='median'), \n RandomForestClassifier(n_estimators=10, random_state=42, n_jobs=-1)\n)",
"_____no_output_____"
]
],
[
[
"### Get Model's Validation Accuracy and Test Accuracy",
"_____no_output_____"
]
],
[
[
"pipeline.fit(X_train, y_train)\n#y_pred = pipeline.predict(X_val)\n\nprint ('Training Accuracy', pipeline.score(X_train, y_train))\n\npipeline.fit(X_val, y_val)\nprint ('Validation Accuracy', pipeline.score(X_val, y_val))",
"Training Accuracy 0.9884200196270854\nValidation Accuracy 0.9919924634950542\n"
]
]
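The cell above re-fits the pipeline on the validation split before calling `score`, so both printed numbers are effectively in-sample accuracy and no test figure is produced. A minimal sketch of a leakage-free evaluation, assuming the `X_train`, `y_train`, `X_val`, `y_val`, `test`, `target`, and `features` objects defined earlier are still in scope; since `price` is continuous, the sketch reuses the regression pipeline from the cross-validation cell and reports MAE rather than classification accuracy.

```python
import category_encoders as ce
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import make_pipeline

# Fit once on the training split only.
pipeline = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),
    SimpleImputer(strategy='median'),
    RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=42)
)
pipeline.fit(X_train, y_train)

# The validation and test rows are never seen during fitting.
print('Validation MAE:', mean_absolute_error(y_val, pipeline.predict(X_val)))
print('Test MAE:', mean_absolute_error(test[target], pipeline.predict(test[features])))
```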
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7358fd4b3dee86c3c67d06ed60406fcd354e858 | 106,914 | ipynb | Jupyter Notebook | .ipynb_checkpoints/ML_play_around-testing1-checkpoint.ipynb | bmskarate/LH_midterm_project | 0404eb5678ac67acc7fb7bce992f5bce1d017ccc | [
"MIT"
] | null | null | null | .ipynb_checkpoints/ML_play_around-testing1-checkpoint.ipynb | bmskarate/LH_midterm_project | 0404eb5678ac67acc7fb7bce992f5bce1d017ccc | [
"MIT"
] | null | null | null | .ipynb_checkpoints/ML_play_around-testing1-checkpoint.ipynb | bmskarate/LH_midterm_project | 0404eb5678ac67acc7fb7bce992f5bce1d017ccc | [
"MIT"
] | null | null | null | 43.638367 | 13,400 | 0.590624 | [
[
[
"import pandas as pd\npd.set_option('display.max_columns', None)\nimport numpy as np\nimport random\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(font_scale=1.4)\nimport copy\nimport pickle",
"_____no_output_____"
],
[
"df = pd.read_csv(\"data/flights_cleaned_no_outlier_iqr_with_delays.csv\")",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"# Prepare data for feature selection",
"_____no_output_____"
],
[
"# Feature selection",
"_____no_output_____"
]
],
[
[
"# https://scikit-learn.org/stable/modules/feature_selection.html",
"_____no_output_____"
],
[
"## After testing, found most suitable columns and will remap for final modelling",
"_____no_output_____"
],
[
"very_important_columns = [ # ran with what the test data can do\n 'fl_date', # get month and bin\n# 'op_unique_carrier', # most extensive name list\n# 'origin', # need 'origin' to merge weather but already merged! ;)\n# 'dest_airport_id', # not sure about this one\n 'crs_dep_time', # bin times\n# 'dep_time', # only using in TRAIN, to learn how other columns affect this\n# 'crs_arr_time',\n# 'arr_time', # only using in TRAIN, to learn how other columns affect this\n 'weather_type', # add weight values\n# 'passengers', # not sure about this one\n 'arr_delay' # so we can make a target column...\n] # important columns seem to be weather(4), time(bin), month(constant)\n'''\nAccording to plots:\nWeather weight: Snow=10, Rain=5, Cloudy=2, Sunny=1\nTime weight: 0-500 = 1, 501-1000 = 8, 1001-1500 = 10, 1501-2000 = 8, 2001 > = 5\nMonth weight = Oct = 1, Nov, Jan = 5, Dec = 10\n'''",
"_____no_output_____"
],
[
"df_ = df.filter(items=very_important_columns)",
"_____no_output_____"
],
[
"df_.head()",
"_____no_output_____"
]
],
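The first cell of this group links to scikit-learn's feature-selection guide, but the column choice above is manual. As an optional cross-check, a small sketch of a data-driven ranking with `SelectKBest`, assuming `df` still holds the full cleaned flights table loaded at the top of this notebook; the `fillna(0)` is just a placeholder for proper imputation.

```python
from sklearn.feature_selection import SelectKBest, f_regression

# Numeric candidate features vs. the arr_delay target.
X_num = df.select_dtypes(include='number').drop(columns=['arr_delay'])
y_sel = df['arr_delay']

selector = SelectKBest(score_func=f_regression, k='all')
selector.fit(X_num.fillna(0), y_sel)

# Rank candidates by their univariate F-score against the target.
for name, score in sorted(zip(X_num.columns, selector.scores_), key=lambda t: -t[1]):
    print(f'{name}: {score:.1f}')
```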
[
[
"# remapping crs_dep_time",
"_____no_output_____"
]
],
[
[
"# Time weight: 0-500 = 1, 501-1000 = 8, 1001-1500 = 10, 1501-2000 = 8, 2001 > = 5\ndf_.crs_dep_time = df_.crs_dep_time // 100\ncrs_dep_time_remap = {\n 0: 0.10,\n 1: 0.10, \n 2: 0.10,\n 3: 0.10, \n 4: 0.10,\n 5: 0.10, \n 6: 0.80,\n 7: 0.80, \n 8: 0.80,\n 9: 0.80, \n 10: 0.80,\n 11: 1, \n 12: 1,\n 13: 1, \n 14: 1,\n 15: 1, \n 16: 0.80,\n 17: 0.80,\n 18: 0.80,\n 19: 0.80, \n 20: 0.80,\n 21: 0.50, \n 22: 0.50, \n 23: 0.50\n}\ndf_[\"dep_time_hour_weight\"] = df_.crs_dep_time.map(crs_dep_time_remap)",
"_____no_output_____"
],
[
"df_.head()",
"_____no_output_____"
],
[
"df_.isna().sum()",
"_____no_output_____"
]
],
[
[
"# remapping fl_date to month",
"_____no_output_____"
]
],
[
[
"df_[\"month\"] = [ i [5:7] for i in df_.fl_date ]\n# change to datetime and get day of week",
"_____no_output_____"
],
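The comment above flags the intent to switch to real datetimes and pull the day of week. A minimal sketch of that step, assuming `df_` still carries the `fl_date` column at this point (it is dropped a couple of cells later); note that an integer month would also need integer keys in the later `month_remap`.

```python
# Parse the flight date once, then derive calendar features from it.
df_['fl_date'] = pd.to_datetime(df_['fl_date'])
df_['month'] = df_['fl_date'].dt.month            # integer month instead of a string slice
df_['day_of_week'] = df_['fl_date'].dt.dayofweek  # Monday=0 ... Sunday=6
df_['is_weekend'] = df_['day_of_week'] >= 5
```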
[
"df_",
"_____no_output_____"
],
[
"# don't drop next time\ndf_ = df_.drop(labels=\"fl_date\", axis=1)",
"_____no_output_____"
],
[
"df_.head()",
"_____no_output_____"
],
[
"df_.isna().sum()",
"_____no_output_____"
],
[
"df_.month.unique()",
"_____no_output_____"
],
[
"# Month weight = Oct = 1, Nov, Jan = 5, Dec = 10\nmonth_remap = { \n '10': 0.10,\n '11': 0.50, \n '12': 1,\n '01': 0.50\n}\ndf_[\"month_weight\"] = df_.month.map(month_remap)",
"_____no_output_____"
],
[
"df_.head()",
"_____no_output_____"
]
],
[
[
"# remapping weather",
"_____no_output_____"
]
],
[
[
"df_.weather_type.unique()",
"_____no_output_____"
],
[
"df_.head()",
"_____no_output_____"
],
[
"df_ = pd.get_dummies(df_, columns=['weather_type'], drop_first=True)",
"_____no_output_____"
],
[
"# # Weather weight: Snow=10, Rain=5, Cloudy=2, Sunny=1\n# weather_remap = {\n# \"Rainy\": 0.50,\n# \"Sunny\": 0.10, \n# \"Snowy\": 1,\n# \"Cloudy\": 0.20\n# }\n# df_.weather_type = df_.weather_type.map(weather_remap)",
"_____no_output_____"
],
[
"df_.head()",
"_____no_output_____"
],
[
"df_.isna().sum()",
"_____no_output_____"
],
[
"# # Used dummies before, got 0.03 to 0.06 results. Trying feature selection/engineering next.\n# df_dummies = pd.get_dummies(df_, columns=['weather_type'])\n# df_dummies = pd.get_dummies(df_dummies, columns=['op_unique_carrier'])\n# df_dummies = pd.get_dummies(df_dummies, columns=['origin'])",
"_____no_output_____"
],
[
"df_.head()",
"_____no_output_____"
],
[
"sns.histplot(df_.arr_delay);",
"_____no_output_____"
]
],
[
[
"# Smote and balance",
"_____no_output_____"
]
],
[
[
"df_checkpoint = df_.copy()\ndf_checkpoint = df_checkpoint.sample(frac=0.25)",
"_____no_output_____"
],
[
"X = df_checkpoint[df_checkpoint.columns.difference(['arr_delay'])]\ny = df_checkpoint[\"arr_delay\"]",
"_____no_output_____"
],
[
"print(X.shape)\nprint(y.shape)",
"(518279, 7)\n(518279,)\n"
],
[
"y = pd.DataFrame(y)",
"_____no_output_____"
],
[
"y[y < 0] = 0",
"_____no_output_____"
],
[
"y.shape",
"_____no_output_____"
],
[
"sns.histplot(y); # super imbalanced.",
"_____no_output_____"
],
[
"# check version number\nimport imblearn\n# transform the dataset\nfrom collections import Counter\nfrom sklearn.datasets import make_classification\nfrom imblearn.over_sampling import SMOTE \noversample = SMOTE()\nX, y = oversample.fit_resample(X, y)",
"_____no_output_____"
],
[
"print(X.shape)\nprint(y.shape)",
"(16811640, 7)\n(16811640, 1)\n"
],
[
"sns.histplot(y);",
"_____no_output_____"
]
],
[
[
"## 16 MILLION ROWS but balanced.",
"_____no_output_____"
]
],
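Oversampling every distinct delay value up to the majority count is what inflates the data to roughly 16.8 million rows, most of which the next cell throws away again. If memory is a concern, one alternative sketched below is to cap the large classes with a random under-sample before running SMOTE; it assumes the pre-SMOTE `X` and `y` from the cells above, and the 50,000-row cap is illustrative, not tuned.

```python
from collections import Counter
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import SMOTE

y_flat = y.values.ravel()
counts = Counter(y_flat)
cap = 50_000  # illustrative per-class ceiling

# First shrink any class that exceeds the cap...
under = RandomUnderSampler(
    sampling_strategy={label: cap for label, n in counts.items() if n > cap},
    random_state=42,
)
X_under, y_under = under.fit_resample(X, y_flat)

# ...then oversample the remaining classes up toward the cap.
# Classes with only a handful of rows may need SMOTE(k_neighbors=<smaller value>).
X_res, y_res = SMOTE(random_state=42).fit_resample(X_under, y_under)
print(Counter(y_res).most_common(5))
```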
[
[
"y.arr_delay",
"_____no_output_____"
],
[
"# remerge y to X... sample frac... resplit.\nX[\"arr_delay\"] = y.arr_delay\nX_checkpoint = X.copy()\nX_checkpoint = X_checkpoint.sample(frac=0.15)",
"_____no_output_____"
],
[
"X = X_checkpoint[X_checkpoint.columns.difference(['arr_delay'])]\ny = X_checkpoint[\"arr_delay\"]",
"_____no_output_____"
],
[
"y = pd.DataFrame(y)",
"_____no_output_____"
],
[
"print(X.shape)\nprint(y.shape)",
"(2521746, 7)\n(2521746, 1)\n"
]
],
[
[
"## Main Task: Regression Problem\nThe target variable is ARR_DELAY. We need to be careful which columns to use and which don't. For example, DEP_DELAY is going to be the perfect predictor, but we can't use it because in real-life scenario, we want to predict the delay before the flight takes of --> We can use average delay from earlier days but not the one from the actual flight we predict.\nFor example, variables CARRIER_DELAY, WEATHER_DELAY, NAS_DELAY, SECURITY_DELAY, LATE_AIRCRAFT_DELAY shouldn't be used directly as predictors as well. However, we can create various transformations from earlier values.\nWe will be evaluating your models by predicting the ARR_DELAY for all flights 1 week in advance.",
"_____no_output_____"
],
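One way to act on the note above (use average delay from earlier days, never the flight's own delay) is an expanding historical mean built only from strictly earlier dates. A sketch under the assumption that a raw frame `df` with `fl_date`, `op_unique_carrier`, and `arr_delay` columns is available, as in the source data; grouping by carrier is just one illustrative choice of key.

```python
# Average arrival delay per carrier over all *previous* days (no leakage from the current flight).
hist = df.copy()
hist['fl_date'] = pd.to_datetime(hist['fl_date'])

daily = (hist.groupby(['op_unique_carrier', 'fl_date'])['arr_delay']
             .mean()
             .rename('daily_mean_delay')
             .reset_index()
             .sort_values('fl_date'))

# shift(1) ensures each day only sees strictly earlier days for its carrier.
daily['carrier_hist_delay'] = (daily.groupby('op_unique_carrier')['daily_mean_delay']
                                    .transform(lambda s: s.shift(1).expanding().mean()))

hist = hist.merge(daily[['op_unique_carrier', 'fl_date', 'carrier_hist_delay']],
                  on=['op_unique_carrier', 'fl_date'], how='left')
```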
[
"#### linear / logistic / multinomial logistic regression\n#### Naive Bayes\n#### Random Forest\n#### SVM\n#### XGBoost\n#### The ensemble of your own choice",
"_____no_output_____"
]
],
[
[
"# X = X.replace([np.inf, -np.inf], np.nan)\n# X = X.dropna()",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\n\nX_train,X_test,y_train,y_test = train_test_split(X,y,train_size=0.75,random_state=42)",
"_____no_output_____"
],
[
"from sklearn.linear_model import Lasso, Ridge, SGDRegressor, ElasticNet\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.svm import SVR\nfrom sklearn.neighbors import KNeighborsRegressor\nfrom sklearn.model_selection import RepeatedKFold\nfrom sklearn.model_selection import cross_val_score\nfrom numpy import absolute\nfrom numpy import mean\nfrom numpy import std",
"_____no_output_____"
]
],
[
[
"## Naive Bayes Model",
"_____no_output_____"
]
],
[
[
"# 0.0361 score\nfrom sklearn import naive_bayes\ngnb = naive_bayes.GaussianNB()\ngnb.fit(X_train, y_train)\ny_pred = gnb.predict(X_test)\nfrom sklearn import metrics\nprint(metrics.accuracy_score(y_test, y_pred))\n\n# save the model to disk\nfilename = 'finalized_Naive_Bayes.sav'\npickle.dump(gnb, open(filename, 'wb'))",
"/Users/louisrossi/opt/anaconda3/envs/ml/lib/python3.8/site-packages/sklearn/utils/validation.py:63: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n return f(*args, **kwargs)\n"
]
],
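The accuracy above is computed while `arr_delay` still holds (clipped, SMOTE-resampled) delay minutes, so every distinct minute value counts as its own class. If a classification framing is wanted, one option is to binarise the target first; the 15-minute cut-off below is an assumption, not something this notebook specifies.

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, classification_report

# Binary target: 1 if the arrival delay is at least 15 minutes.
y_train_bin = (y_train.values.ravel() >= 15).astype(int)
y_test_bin = (y_test.values.ravel() >= 15).astype(int)

gnb_bin = GaussianNB().fit(X_train, y_train_bin)
pred = gnb_bin.predict(X_test)

print('Accuracy:', accuracy_score(y_test_bin, pred))
print(classification_report(y_test_bin, pred))
```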
[
[
"## Lasso (not good)",
"_____no_output_____"
]
],
[
[
"# 0.060 score unscaled: scaled data 0.041: after trimming huge 0.034\nmodel = Lasso(alpha=0.5)\ncv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=42)\nscores = cross_val_score(model, X_train, y_train, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)\n# force scores to be positive\nscores = absolute(scores)\nprint('Mean MAE: %.3f (%.3f)' % (mean(scores), std(scores)))",
"_____no_output_____"
]
],
[
[
"## Random Forest Classifier Model",
"_____no_output_____"
]
],
[
[
"# 0.036 score unscaled: scaled same\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.datasets import make_classification\nclf = RandomForestClassifier(max_depth=3, random_state=42, n_jobs=-1)\nclf.fit(X_train, y_train)\n\ny_pred = clf.predict(X_test)\n\n# 0.03 score\nfrom sklearn.metrics import accuracy_score\naccuracy = accuracy_score(y_test,y_pred)\nprint(accuracy)\n\n# save the model to disk\nfilename = 'finalized_Random_forest.sav'\npickle.dump(clf, open(filename, 'wb'))",
"_____no_output_____"
],
[
"from sklearn import metrics\nfrom sklearn.metrics import log_loss, roc_auc_score, recall_score, precision_score, average_precision_score, f1_score, classification_report, accuracy_score, plot_roc_curve, plot_precision_recall_curve, plot_confusion_matrix\n\nprint(\"Confusion Matrix\")\nplot_confusion_matrix(clf, X_test, y_test)",
"_____no_output_____"
]
],
[
[
"## Gridsearch cells. Do not run.",
"_____no_output_____"
]
],
[
[
"# # parameter grid\n# parameter_candidates = {\n# 'n_estimators':[270, 285, 300],\n# 'max_depth':[3]\n# }\n# from sklearn import datasets, svm\n# from sklearn.model_selection import GridSearchCV\n# grid_result = GridSearchCV(clf, param_grid=parameter_candidates, n_jobs=-1)\n# the_fit = grid_result.fit(X_train, y_train.values.ravel())\n# bestresult = grid_result.best_estimator_",
"_____no_output_____"
],
[
"# # View the accuracy score best run: MD3, nest300 score:0.04\n# print('Best score for data1:', grid_result.best_score_) \n# print(grid_result.best_params_)\n# print(bestresult)\n# grid_result.score(X_train, y_train)",
"_____no_output_____"
]
],
[
[
"## Random Forest tuned",
"_____no_output_____"
]
],
[
[
"# 0.036 score unscaled frac=0.25 : scaled full data score SAME 0.036\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.datasets import make_classification\nclf = RandomForestClassifier(max_depth=3, n_estimators=285, random_state=42, n_jobs=-1)\nclf.fit(X_train, y_train)\n\ny_pred = clf.predict(X_test)\n\n# score\nfrom sklearn.metrics import accuracy_score\naccuracy = accuracy_score(y_test,y_pred)\nprint(accuracy)\nprint(y_test)\nprint(y_pred)",
"_____no_output_____"
]
],
[
[
"## Linear/Log Regression",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LinearRegression\nreg = LinearRegression().fit(X_train, y_train)\nprint(reg.score(X_train, y_train))\n\n# save the model to disk\nfilename = 'finalized_LinReg.sav'\npickle.dump(reg, open(filename, 'wb'))",
"_____no_output_____"
],
[
"reg.coef_",
"_____no_output_____"
],
[
"reg.intercept_",
"_____no_output_____"
]
],
[
[
"## Decision Tree",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\nfrom sklearn import metrics\nclf_dt = DecisionTreeClassifier()\nclf_dt = clf_dt.fit(X_train,y_train)\ny_pred = clf_dt.predict(X_test)\nprint(\"Accuracy:\",metrics.accuracy_score(y_test, y_pred))\n\n# save the model to disk\nfilename = 'finalized_Decision_Tree.sav'\npickle.dump(clf_dt, open(filename, 'wb'))",
"_____no_output_____"
],
[
"# How do I visualize a tree?",
"_____no_output_____"
]
],
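To answer the question in the cell above: recent scikit-learn versions ship `sklearn.tree.plot_tree`, which draws a fitted tree directly with matplotlib (for older versions, `export_graphviz` plus the graphviz package is the usual route). A sketch using the `clf_dt` fitted above; `max_depth=3` just keeps the drawing readable.

```python
import matplotlib.pyplot as plt
from sklearn.tree import plot_tree

fig, ax = plt.subplots(figsize=(20, 10))
plot_tree(
    clf_dt,                      # the DecisionTreeClassifier fitted above
    max_depth=3,                 # only draw the top of the tree
    feature_names=list(X_train.columns),
    filled=True,
    fontsize=8,
    ax=ax,
)
plt.show()
```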
[
[
"## SVM (do not run)",
"_____no_output_____"
]
],
[
[
"# from sklearn.preprocessing import StandardScaler\n# from sklearn.preprocessing import Normalizer\n# scaler = StandardScaler()\n# scaler.fit(df_checkpoint)\n# X = scaler.transform(df_checkpoint.loc[:, df_checkpoint.columns != 'arr_delay'])\n# X = df_checkpoint[df_checkpoint.columns.difference(['arr_delay'])]\n# y = df_checkpoint[\"arr_delay\"]",
"_____no_output_____"
],
[
"# from sklearn import svm\n# clf = svm.SVC(kernel='poly')\n# clf.fit(X_train, y_train.values.ravel())\n# y_pred = clf.predict(X_test)",
"_____no_output_____"
],
[
"# from sklearn.metrics import confusion_matrix\n# confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"# clf2 = svm.SVC(kernel='rbf')\n# clf2.fit(X_train, y_train.values.ravel())\n# y_pred2 = clf2.predict(X_test)",
"_____no_output_____"
],
[
"# from sklearn.metrics import confusion_matrix\n# confusion_matrix(y_test, y_pred2)",
"_____no_output_____"
],
[
"# clf3 = svm.SVC(kernel='sigmoid')\n# clf3.fit(X_train, y_train.values.ravel())\n# y_pred3 = clf3.predict(X_test)",
"_____no_output_____"
],
[
"# from sklearn.metrics import confusion_matrix\n# confusion_matrix(y_test, y_pred3)",
"_____no_output_____"
],
[
"# from sklearn import metrics\n# print(\"Accuracy poly:\",metrics.accuracy_score(y_test, y_pred))\n# print(\"Accuracy rbg:\",metrics.accuracy_score(y_test, y_pred2))\n# print(\"Accuracy sigmoid:\",metrics.accuracy_score(y_test, y_pred3))",
"_____no_output_____"
]
],
[
[
"## XGBoost",
"_____no_output_____"
]
],
[
[
"# import xgboost as xgb\n# from sklearn.metrics import mean_squared_error\n# data_dmatrix = xgb.DMatrix(data=X, label=y)\n# xg_reg = xgb.XGBRegressor(objective ='reg:linear', # not XGBClassifier() bc regression.\n# colsample_bytree = 0.3, \n# learning_rate = 0.1,\n# max_depth = 3, \n# alpha = 10, \n# n_estimators = 250)",
"_____no_output_____"
],
[
"# Import Test Dataset, clean/merge, export with y column to CSV for submission",
"_____no_output_____"
]
]
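The XGBoost cell above is commented out and never fitted. A minimal sketch of how it could be finished against the existing train/test split, swapping the deprecated `reg:linear` objective for `reg:squarederror` and scoring with MAE to match the earlier models; the hyperparameters are carried over from the commented cell, not tuned.

```python
import xgboost as xgb
from sklearn.metrics import mean_absolute_error

xg_reg = xgb.XGBRegressor(
    objective='reg:squarederror',  # current name for the squared-error objective
    colsample_bytree=0.3,
    learning_rate=0.1,
    max_depth=3,
    alpha=10,
    n_estimators=250,
    n_jobs=-1,
)
xg_reg.fit(X_train, y_train.values.ravel())

preds = xg_reg.predict(X_test)
print('Test MAE:', mean_absolute_error(y_test.values.ravel(), preds))
```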
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e735905647c0b690d6b93484562f009f919162f0 | 807,105 | ipynb | Jupyter Notebook | tutorials/Tutorial_8_Semisupervised_Learning.ipynb | nlzimmerman/pomegranate | f18ef2c20ba132f5b1d2e00cf28c5ff3a505bb38 | [
"MIT"
] | null | null | null | tutorials/Tutorial_8_Semisupervised_Learning.ipynb | nlzimmerman/pomegranate | f18ef2c20ba132f5b1d2e00cf28c5ff3a505bb38 | [
"MIT"
] | null | null | null | tutorials/Tutorial_8_Semisupervised_Learning.ipynb | nlzimmerman/pomegranate | f18ef2c20ba132f5b1d2e00cf28c5ff3a505bb38 | [
"MIT"
] | 3 | 2018-02-20T00:34:55.000Z | 2020-12-21T13:14:16.000Z | 1,660.709877 | 394,552 | 0.946651 | [
[
[
"# Semi-supervised Learning in pomegranate\n\nMost classical machine learning algorithms either assume that an entire dataset is either labeled (supervised learning) or that there are no labels (unsupervised learning). However, frequently it is the case that some labeled data is present but there is a great deal of unlabeled data as well. A great example of this is that of computer vision where the internet is filled of pictures (mostly of cats) that could be useful, but you don't have the time or money to label them all in accordance with your specific task. Typically what ends up happening is that either the unlabeled data is discarded in favor of training a model solely on the labeled data, or that an unsupervised model is initialized with the labeled data and then set free on the unlabeled data. Neither method uses both sets of data in the optimization process.\n\nSemi-supervised learning is a method to incorporate both labeled and unlabeled data into the training task, typically yield better performing estimators than using the labeled data alone. There are many methods one could use for semisupervised learning, and <a href=\"http://scikit-learn.org/stable/modules/label_propagation.html\">scikit-learn has a good write-up on some of these techniques</a>.\n\npomegranate natively implements semi-supervised learning through the a merger of maximum-likelihood and expectation-maximization. As an overview, the models are initialized by first fitting to the labeled data directly using maximum-likelihood estimates. The models are then refined by running expectation-maximization (EM) on the unlabeled datasets and adding the sufficient statistics to those acquired from maximum-likelihood estimates on the labeled data. Under the hood both a supervised model and an unsupervised mixture model are created using the same underlying distribution objects. The summarize method is first called using the supervised method on the labeled data, and then the summarize method is called again using the unsupervised method on the unlabeled data. This causes the sufficient statistics to be updated appropriately given the results of first maximum-likelihood and then EM. This process continues until convergence in the EM step.\n\nLet's take a look!",
"_____no_output_____"
]
],
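Concretely, the only API difference from fully supervised training is the label vector: rows without a label carry -1 and the rest carry their class, and `from_samples` performs the maximum-likelihood/EM merger described above. A toy sketch of that convention, mirroring the calls used on real data later in this notebook:

```python
import numpy as np
from pomegranate import NaiveBayes, NormalDistribution

# Two features; four labeled rows and one unlabeled row marked with -1.
X_toy = np.array([[1.0, 2.0], [1.2, 1.9], [5.0, 6.1], [5.2, 5.8], [3.1, 3.0]])
y_toy = np.array([0, 0, 1, 1, -1])

# Labeled rows drive the maximum-likelihood initialization;
# the -1 row is folded in through EM.
model = NaiveBayes.from_samples(NormalDistribution, X_toy, y_toy)
print(model.predict(X_toy))
```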
[
[
"%pylab inline\nfrom pomegranate import *\nfrom sklearn.semi_supervised import LabelPropagation\nfrom sklearn.datasets import make_blobs\nimport seaborn, time\nseaborn.set_style('whitegrid')\nnumpy.random.seed(1)",
"Populating the interactive namespace from numpy and matplotlib\n"
]
],
[
[
"Let's first generate some data in the form of blobs that are close together. Generally one tends to have far more unlabeled data than labeled data, so let's say that a person only has 50 samples of labeled training data and 4950 unlabeled samples. In pomegranate you a sample can be specified as lacking a label by providing the integer -1 as the label, just like in scikit-learn. Let's also say there there is a bit of bias in the labeled samples to inject some noise into the problem, as otherwise Gaussian blobs are trivially modeled with even a few samples.",
"_____no_output_____"
]
],
[
[
"X, y = make_blobs(10000, 2, 3, cluster_std=2)\nx_min, x_max = X[:,0].min()-2, X[:,0].max()+2\ny_min, y_max = X[:,1].min()-2, X[:,1].max()+2\n\nX_train = X[:5000]\ny_train = y[:5000]\n\n# Set the majority of samples to unlabeled.\ny_train[numpy.random.choice(5000, size=4950, replace=False)] = -1\n\n# Inject noise into the problem\nX_train[y_train != -1] += 2.5\n\nX_test = X[5000:]\ny_test = y[5000:]",
"_____no_output_____"
]
],
[
[
"Now let's take a look at the data when we plot it.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(8, 8))\nplt.scatter(X_train[y_train == -1, 0], X_train[y_train == -1, 1], color='0.6')\nplt.scatter(X_train[y_train == 0, 0], X_train[y_train == 0, 1], color='c')\nplt.scatter(X_train[y_train == 1, 0], X_train[y_train == 1, 1], color='m')\nplt.scatter(X_train[y_train == 2, 0], X_train[y_train == 2, 1], color='r')\nplt.xlim(x_min, x_max)\nplt.ylim(y_min, y_max)\nplt.show()",
"_____no_output_____"
]
],
[
[
"The clusters of unlabeled data seem clear to us, and it doesn't seem like the labeled data is perfectly faithful to these clusters. This can typically happen in a semisupervised setting as well, as the data that is labeled is sometimes biased either because the labeled data was chosen as it was easy to label, or the data was chosen to be labeled in a biased maner.\n\nNow let's try fitting a simple naive Bayes classifier to this data and compare the results when using only the labeled data to when using both the labeled and unlabeled data together.",
"_____no_output_____"
]
],
[
[
"model_a = NaiveBayes.from_samples(NormalDistribution, X_train[y_train != -1], y_train[y_train != -1])\nprint \"Supervised Learning Accuracy: {}\".format((model_a.predict(X_test) == y_test).mean())\n\nmodel_b = NaiveBayes.from_samples(NormalDistribution, X_train, y_train)\nprint \"Semisupervised Learning Accuracy: {}\".format((model_b.predict(X_test) == y_test).mean())",
"Supervised Learning Accuracy: 0.8706\nSemisupervised Learning Accuracy: 0.9274\n"
]
],
[
[
"It seems like we get a big bump in test set accuracy when we use semi-supervised learning. Let's visualize the data to get a better sense of what is happening here.",
"_____no_output_____"
]
],
[
[
"def plot_contour(X, y, Z):\n plt.scatter(X[y == -1, 0], X[y == -1, 1], color='0.2', alpha=0.5, s=20)\n plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c', s=20)\n plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', s=20)\n plt.scatter(X[y == 2, 0], X[y == 2, 1], color='r', s=20)\n plt.contour(xx, yy, Z)\n plt.xlim(x_min, x_max)\n plt.ylim(y_min, y_max)\n plt.xticks(fontsize=14)\n plt.yticks(fontsize=14)\n\nxx, yy = numpy.meshgrid(numpy.arange(x_min, x_max, 0.1), numpy.arange(y_min, y_max, 0.1))\nZ1 = model_a.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)\nZ2 = model_b.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)\n\nplt.figure(figsize=(16, 16))\nplt.subplot(221)\nplt.title(\"Training Data, Supervised Boundaries\", fontsize=16)\nplot_contour(X_train, y_train, Z1)\n\nplt.subplot(223)\nplt.title(\"Training Data, Semi-supervised Boundaries\", fontsize=16)\nplot_contour(X_train, y_train, Z2)\n\nplt.subplot(222)\nplt.title(\"Test Data, Supervised Boundaries\", fontsize=16)\nplot_contour(X_test, y_test, Z1)\n\nplt.subplot(224)\nplt.title(\"Test Data, Semi-supervised Boundaries\", fontsize=16)\nplot_contour(X_test, y_test, Z2)\nplt.show()",
"_____no_output_____"
]
],
[
[
"The contours plot the decision boundaries between the different classes with the left figures corresponding to the partially labeled training set and the right figures corresponding to the test set. We can see that the boundaries learning using only the labeled data look a bit weird when considering the unlabeled data, particularly in that it doesn't cleanly separate the cyan cluster from the other two. In addition, it seems like the boundary between the magenta and red clusters is a bit curved in an unrealistic way. We would not expect points that fell around (-18, -7) to actually come from the red class. Training the model in a semi-supervised manner cleaned up both of these concerns by learning better boundaries that are also flatter and more generalizable.\n\nLet's next compare the training times to see how much slower it is to do semi-supervised learning than it is to do simple supervised learning.",
"_____no_output_____"
]
],
[
[
"print \"Supervised Learning: \"\n%timeit NaiveBayes.from_samples(NormalDistribution, X_train[y_train != -1], y_train[y_train != -1])\nprint\nprint \"Semi-supervised Learning: \"\n%timeit NaiveBayes.from_samples(NormalDistribution, X_train, y_train)\nprint\nprint \"Label Propagation (sklearn): \"\n%timeit LabelPropagation().fit(X_train, y_train)",
"Supervised Learning: \n100 loops, best of 3: 1.94 ms per loop\n\nSemi-supervised Learning: \n1 loop, best of 3: 961 ms per loop\n\nLabel Propagation (sklearn): \n1 loop, best of 3: 4.11 s per loop\n"
]
],
[
[
"It is quite a bit slower to do semi-supervised learning than simple supervised learning in this example. This is expected as the simple supervised update for naive Bayes is a trivial MLE across each dimension whereas the semi-supervised case requires EM to converge to complete. However, it is still faster to do semi-supervised learning this setting to learn a naive Bayes classifier than it is to fit the label propagation estimator from sklearn. \n\nHowever, though it is widely used, the naive Bayes classifier is still a fairly simple model. One can construct a more complicated model that does not assume feature independence called a Bayes classifier that can also be trained using semi-supervised learning in pretty much the same manner. You can read more about the Bayes classifier in its tutorial in the tutorial folder. Let's move on to more complicated data and try to fit a mixture model Bayes classifier, comparing the performance between using only labeled data and using all data.\n\nFirst let's generate some more complicated, noisier data.",
"_____no_output_____"
]
],
[
[
"X = numpy.empty(shape=(0, 2))\nX = numpy.concatenate((X, numpy.random.normal(4, 1, size=(300, 2)).dot([[-2, 0.5], [2, 0.5]])))\nX = numpy.concatenate((X, numpy.random.normal(3, 1, size=(650, 2)).dot([[-1, 2], [1, 0.8]])))\nX = numpy.concatenate((X, numpy.random.normal(7, 1, size=(800, 2)).dot([[-0.75, 0.8], [0.9, 1.5]])))\nX = numpy.concatenate((X, numpy.random.normal(6, 1, size=(220, 2)).dot([[-1.5, 1.2], [0.6, 1.2]])))\nX = numpy.concatenate((X, numpy.random.normal(8, 1, size=(350, 2)).dot([[-0.2, 0.8], [0.7, 0.8]])))\nX = numpy.concatenate((X, numpy.random.normal(9, 1, size=(650, 2)).dot([[-0.0, 0.8], [0.5, 1.2]])))\nx_min, x_max = X[:,0].min()-2, X[:,0].max()+2\ny_min, y_max = X[:,1].min()-2, X[:,1].max()+2\n\ny = numpy.concatenate((numpy.zeros(950), numpy.ones(1020), numpy.ones(1000)*2))\nidxs = numpy.arange(2970)\nnumpy.random.shuffle(idxs)\n\nX = X[idxs]\ny = y[idxs]\n\nX_train, X_test = X[:2500], X[2500:]\ny_train, y_test = y[:2500], y[2500:]\ny_train[numpy.random.choice(2500, size=2450, replace=False)] = -1\n\nplt.scatter(X_train[y_train == -1, 0], X_train[y_train == -1, 1], color='0.6')\nplt.scatter(X_train[y_train == 0, 0], X_train[y_train == 0, 1], color='c')\nplt.scatter(X_train[y_train == 1, 0], X_train[y_train == 1, 1], color='m')\nplt.scatter(X_train[y_train == 2, 0], X_train[y_train == 2, 1], color='r')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Now let's take a look at the accuracies that we get when training a model using just the labeled examples versus all of the examples in a semi-supervised manner.",
"_____no_output_____"
]
],
[
[
"d1 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 0])\nd2 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 1])\nd3 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 2])\nmodel_a = BayesClassifier([d1, d2, d3]).fit(X_train[y_train != -1], y_train[y_train != -1])\nprint \"Supervised Learning Accuracy: {}\".format((model_a.predict(X_test) == y_test).mean())\n\nd1 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 0])\nd2 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 1])\nd3 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 2])\nmodel_b = BayesClassifier([d1, d2, d3])\nmodel_b.fit(X_train, y_train)\nprint \"Semisupervised Learning Accuracy: {}\".format((model_b.predict(X_test) == y_test).mean())",
"Supervised Learning Accuracy: 0.929787234043\nSemisupervised Learning Accuracy: 0.96170212766\n"
]
],
[
[
"As expected, the semi-supervised method performs better, getting rid of nearly half of the errors. Let's visualize the landscape in the same manner as before in order to see why this is the case.",
"_____no_output_____"
]
],
[
[
"xx, yy = numpy.meshgrid(numpy.arange(x_min, x_max, 0.1), numpy.arange(y_min, y_max, 0.1))\nZ1 = model_a.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)\nZ2 = model_b.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)\n\nplt.figure(figsize=(16, 16))\nplt.subplot(221)\nplt.title(\"Training Data, Supervised Boundaries\", fontsize=16)\nplot_contour(X_train, y_train, Z1)\n\nplt.subplot(223)\nplt.title(\"Training Data, Semi-supervised Boundaries\", fontsize=16)\nplot_contour(X_train, y_train, Z2)\n\nplt.subplot(222)\nplt.title(\"Test Data, Supervised Boundaries\", fontsize=16)\nplot_contour(X_test, y_test, Z1)\n\nplt.subplot(224)\nplt.title(\"Test Data, Semi-supervised Boundaries\", fontsize=16)\nplot_contour(X_test, y_test, Z2)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Immediately, one would notice that the decision boundaries when using semi-supervised learning are smoother than those when using only a few samples. This can be explained mostly because having more data can generally lead to smoother decision boundaries as the model does not overfit to spurious examples in the dataset. It appears that the majority of the correctly classified samples come from having a more accurate decision boundary for the magenta samples in the left cluster. When using only the labeled samples many of the magenta samples in this region get classified incorrectly as cyan samples. In contrast, when using all of the data these points are all classified correctly.\n\nLastly, let's take a look at a time comparison in this more complicated example.",
"_____no_output_____"
]
],
[
[
"d1 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 0])\nd2 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 1])\nd3 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 2])\nmodel = BayesClassifier([d1, d2, d3])\n\nprint \"Supervised Learning: \"\n%timeit model.fit(X_train[y_train != -1], y_train[y_train != -1])\nprint\n\nd1 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 0])\nd2 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 1])\nd3 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 2])\nmodel = BayesClassifier([d1, d2, d3])\n\nprint \"Semi-supervised Learning: \"\n%timeit model.fit(X_train, y_train)\n\nprint\nprint \"Label Propagation (sklearn): \"\n%timeit LabelPropagation().fit(X_train, y_train)",
"Supervised Learning: \n100 loops, best of 3: 3.73 ms per loop\n\nSemi-supervised Learning: \n10 loops, best of 3: 147 ms per loop\n\nLabel Propagation (sklearn): \n1 loop, best of 3: 1 s per loop\n"
]
],
[
[
"It looks like the difference, while still large, is not as large as in the previous example, being only a ~40x difference instead of a ~1000x difference. This is likely because even without the unlabeled data the supervised model is performing EM to train each of the mixtures that are the components of the Bayes classifier. Again, it is faster to do semi-supervised learning in this manner for generative models than it is to perform LabelPropagation.",
"_____no_output_____"
],
[
"## Summary\n\nIn the real world (ack) there are frequently situations where only a small fraction of the available data has useful labels. Semi-supervised learning provides a framework for leveraging both the labeled and unlabeled aspects of a dataset to learn a sophisticated estimator. In this case, semi-supervised learning plays well with probabilistic models as normal maximum likelihood estimates can be done on the labeled data and expectation-maximization can be run on the unlabeled data using the same distributions.\n\nThis notebook has covered how to implement semi-supervised learning in pomegranate using both naive Bayes and a Bayes classifier. All one has to do is set the labels of unlabeled samples to -1 and pomegranate will take care of the rest. This can be particularly useful when encountering complex, noisy, data in the real world that aren't neat Gaussian blobs.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e7359a38a9df4eb1b2a934d92783b68a38479a3d | 53,940 | ipynb | Jupyter Notebook | 03_classification.ipynb | mbenkhemis/handson-ml | ba6be0fe7d0a0d29142edad621c42bc1394a3956 | [
"Apache-2.0"
] | null | null | null | 03_classification.ipynb | mbenkhemis/handson-ml | ba6be0fe7d0a0d29142edad621c42bc1394a3956 | [
"Apache-2.0"
] | null | null | null | 03_classification.ipynb | mbenkhemis/handson-ml | ba6be0fe7d0a0d29142edad621c42bc1394a3956 | [
"Apache-2.0"
] | null | null | null | 27.105528 | 821 | 0.569837 | [
[
[
"**Classification**",
"_____no_output_____"
],
[
"# Setup",
"_____no_output_____"
],
[
"First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:",
"_____no_output_____"
]
],
[
[
"# To support both python 2 and python 3\nfrom __future__ import division, print_function, unicode_literals\n\n# Common imports\nimport numpy as np\nimport os\n\n# to make this notebook's output stable across runs\nnp.random.seed(42)\n\n# To plot pretty figures\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nplt.rcParams['axes.labelsize'] = 14\nplt.rcParams['xtick.labelsize'] = 12\nplt.rcParams['ytick.labelsize'] = 12\n\n# Where to save the figures\nPROJECT_ROOT_DIR = \".\"\nCHAPTER_ID = \"classification\"\n\ndef save_fig(fig_id, tight_layout=True):\n path = os.path.join(PROJECT_ROOT_DIR, \"images\", CHAPTER_ID, fig_id + \".png\")\n print(\"Saving figure\", fig_id)\n if tight_layout:\n plt.tight_layout()\n plt.savefig(path, format='png', dpi=300)",
"_____no_output_____"
]
],
[
[
"# MNIST",
"_____no_output_____"
]
],
[
[
"from keras.datasets import mnist\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\nX_train=X_train.reshape(X_train.shape[0],28*28)\nX_test=X_test.reshape(X_test.shape[0],28*28)",
"_____no_output_____"
],
[
"28*28",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nsome_digit = X_train[36000]\nsome_digit_image = some_digit.reshape(28, 28)\nplt.imshow(some_digit_image, cmap = matplotlib.cm.binary,\n interpolation=\"nearest\")\nplt.axis(\"off\")\n\nsave_fig(\"some_digit_plot\")\nplt.show()",
"_____no_output_____"
],
[
"def plot_digit(data):\n image = data.reshape(28, 28)\n plt.imshow(image, cmap = matplotlib.cm.binary,\n interpolation=\"nearest\")\n plt.axis(\"off\")",
"_____no_output_____"
],
[
"# EXTRA\ndef plot_digits(instances, images_per_row=10, **options):\n size = 28\n images_per_row = min(len(instances), images_per_row)\n images = [instance.reshape(size,size) for instance in instances]\n n_rows = (len(instances) - 1) // images_per_row + 1\n row_images = []\n n_empty = n_rows * images_per_row - len(instances)\n images.append(np.zeros((size, size * n_empty)))\n for row in range(n_rows):\n rimages = images[row * images_per_row : (row + 1) * images_per_row]\n row_images.append(np.concatenate(rimages, axis=1))\n image = np.concatenate(row_images, axis=0)\n plt.imshow(image, cmap = matplotlib.cm.binary, **options)\n plt.axis(\"off\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(9,9))\nexample_images = np.r_[X_train[:12000:600], X_train[13000:30600:600], X_train[30600:60000:590]]\nplot_digits(example_images, images_per_row=10)\nsave_fig(\"more_digits_plot\")\nplt.show()",
"_____no_output_____"
],
[
"y_train[36000]",
"_____no_output_____"
],
[
"import numpy as np\n\nshuffle_index = np.random.permutation(60000)\nX_train, y_train = X_train[shuffle_index], y_train[shuffle_index]",
"_____no_output_____"
]
],
[
[
"# Binary classifier",
"_____no_output_____"
]
],
[
[
"y_train_5 = (y_train == 5)\ny_test_5 = (y_test == 5)",
"_____no_output_____"
],
[
"from sklearn.linear_model import SGDClassifier\n\nsgd_clf = SGDClassifier(max_iter=5, random_state=42)\nsgd_clf.fit(X_train, y_train_5)",
"_____no_output_____"
],
[
"sgd_clf.predict([some_digit])",
"_____no_output_____"
]
],
[
[
"### sklearn cross_val",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import cross_val_score\ncross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring=\"accuracy\")",
"_____no_output_____"
]
],
[
[
"### reimplemented crossval",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import StratifiedKFold\nfrom sklearn.base import clone\n\nskfolds = StratifiedKFold(n_splits=3, random_state=42)\n\nfor train_index, test_index in skfolds.split(X_train, y_train_5):\n clone_clf = clone(sgd_clf)\n X_train_folds = X_train[train_index]\n y_train_folds = (y_train_5[train_index])\n X_test_fold = X_train[test_index]\n y_test_fold = (y_train_5[test_index])\n\n clone_clf.fit(X_train_folds, y_train_folds)\n y_pred = clone_clf.predict(X_test_fold)\n n_correct = sum(y_pred == y_test_fold)\n print(n_correct / len(y_pred))",
"_____no_output_____"
],
[
"from sklearn.base import BaseEstimator\nclass Never5Classifier(BaseEstimator):\n def fit(self, X, y=None):\n pass\n def predict(self, X):\n return np.zeros((len(X), 1), dtype=bool)",
"_____no_output_____"
],
[
"never_5_clf = Never5Classifier()\ncross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring=\"accuracy\")",
"_____no_output_____"
],
[
"from sklearn.model_selection import cross_val_predict\n\ny_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)",
"_____no_output_____"
],
[
"from sklearn.metrics import confusion_matrix\n\nconfusion_matrix(y_train_5, y_train_pred)",
"_____no_output_____"
],
[
"y_train_perfect_predictions = y_train_5",
"_____no_output_____"
],
[
"confusion_matrix(y_train_5, y_train_perfect_predictions)",
"_____no_output_____"
],
[
"((true_negatives,false_positives),(false_negatives,true_positives)) = confusion_matrix(y_train_5, y_train_pred)",
"_____no_output_____"
],
[
"true_positives,false_positives,false_negatives,true_negatives",
"_____no_output_____"
],
[
"from sklearn.metrics import precision_score, recall_score\n\nprecision_score(y_train_5, y_train_pred)",
"_____no_output_____"
],
[
"true_positives / (true_positives + false_positives)",
"_____no_output_____"
],
[
"recall_score(y_train_5, y_train_pred)",
"_____no_output_____"
],
[
"true_positives / (true_positives + false_negatives)",
"_____no_output_____"
],
[
"from sklearn.metrics import f1_score\nf1_score(y_train_5, y_train_pred)",
"_____no_output_____"
],
[
"true_positives / (true_positives + (false_negatives + false_positives)/2)",
"_____no_output_____"
],
[
"y_scores = sgd_clf.decision_function([some_digit])\ny_scores",
"_____no_output_____"
],
[
"threshold = 0\ny_some_digit_pred = (y_scores > threshold)",
"_____no_output_____"
],
[
"y_some_digit_pred",
"_____no_output_____"
],
[
"threshold = 200000\ny_some_digit_pred = (y_scores > threshold)\ny_some_digit_pred",
"_____no_output_____"
],
[
"y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3,\n method=\"decision_function\")",
"_____no_output_____"
],
[
"from sklearn.metrics import precision_recall_curve\n\nprecisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)",
"_____no_output_____"
],
[
"def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):\n plt.plot(thresholds, precisions[:-1], \"b--\", label=\"Precision\", linewidth=2)\n plt.plot(thresholds, recalls[:-1], \"g-\", label=\"Recall\", linewidth=2)\n plt.xlabel(\"Threshold\", fontsize=16)\n plt.legend(loc=\"upper left\", fontsize=16)\n plt.ylim([0, 1])\n\nplt.figure(figsize=(8, 4))\nplot_precision_recall_vs_threshold(precisions, recalls, thresholds)\nplt.xlim([-700000, 700000])\nsave_fig(\"precision_recall_vs_threshold_plot\")\nplt.show()",
"_____no_output_____"
],
[
"(y_train_pred == (y_scores > 0)).all()",
"_____no_output_____"
],
[
"y_train_pred_90 = (y_scores > 70000)",
"_____no_output_____"
],
[
"precision_score(y_train_5, y_train_pred_90)",
"_____no_output_____"
],
[
"recall_score(y_train_5, y_train_pred_90)",
"_____no_output_____"
],
[
"def plot_precision_vs_recall(precisions, recalls):\n plt.plot(recalls, precisions, \"b-\", linewidth=2)\n plt.xlabel(\"Recall\", fontsize=16)\n plt.ylabel(\"Precision\", fontsize=16)\n plt.axis([0, 1, 0, 1])\n\nplt.figure(figsize=(8, 6))\nplot_precision_vs_recall(precisions, recalls)\nsave_fig(\"precision_vs_recall_plot\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"# ROC curves",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import roc_curve\n\nfpr, tpr, thresholds = roc_curve(y_train_5, y_scores)",
"_____no_output_____"
],
[
"def plot_roc_curve(fpr, tpr, label=None):\n plt.plot(fpr, tpr, linewidth=2, label=label)\n plt.plot([0, 1], [0, 1], 'k--')\n plt.axis([0, 1, 0, 1])\n plt.xlabel('False Positive Rate', fontsize=16)\n plt.ylabel('True Positive Rate', fontsize=16)\n\nplt.figure(figsize=(8, 6))\nplot_roc_curve(fpr, tpr)\nsave_fig(\"roc_curve_plot\")\nplt.show()",
"_____no_output_____"
],
[
"from sklearn.metrics import roc_auc_score\n\nroc_auc_score(y_train_5, y_scores)",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\nforest_clf = RandomForestClassifier(random_state=42)\ny_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,\n method=\"predict_proba\")",
"_____no_output_____"
],
[
"y_scores_forest = y_probas_forest[:, 1] # score = proba of positive class\nfpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5,y_scores_forest)",
"_____no_output_____"
],
[
"plt.figure(figsize=(8, 6))\nplt.plot(fpr, tpr, \"b:\", linewidth=2, label=\"SGD\")\nplot_roc_curve(fpr_forest, tpr_forest, \"Random Forest\")\nplt.legend(loc=\"lower right\", fontsize=16)\nsave_fig(\"roc_curve_comparison_plot\")\nplt.show()",
"_____no_output_____"
],
[
"roc_auc_score(y_train_5, y_scores_forest)",
"_____no_output_____"
],
[
"y_train_pred_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3)",
"_____no_output_____"
],
[
"precision_score(y_train_5, y_train_pred_forest)",
"_____no_output_____"
],
[
"recall_score(y_train_5, y_train_pred_forest)",
"_____no_output_____"
]
],
[
[
"# Multiclass classification",
"_____no_output_____"
]
],
[
[
"sgd_clf.fit(X_train, y_train)\nsgd_clf.predict([some_digit])",
"_____no_output_____"
],
[
"some_digit_scores = sgd_clf.decision_function([some_digit])\nsome_digit_scores",
"_____no_output_____"
],
[
"np.argmax(some_digit_scores)",
"_____no_output_____"
],
[
"sgd_clf.classes_",
"_____no_output_____"
],
[
"sgd_clf.classes_[5]",
"_____no_output_____"
],
[
"from sklearn.multiclass import OneVsOneClassifier\novo_clf = OneVsOneClassifier(SGDClassifier(max_iter=5, random_state=42))\novo_clf.fit(X_train, y_train)\novo_clf.predict([some_digit])",
"_____no_output_____"
],
[
"len(ovo_clf.estimators_)",
"_____no_output_____"
],
[
"forest_clf.fit(X_train, y_train)\nforest_clf.predict([some_digit])",
"_____no_output_____"
],
[
"forest_clf.predict_proba([some_digit])",
"_____no_output_____"
],
[
"cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring=\"accuracy\")",
"_____no_output_____"
],
[
"from sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train.astype(np.float64))\ncross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring=\"accuracy\")",
"_____no_output_____"
],
[
"y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)\nconf_mx = confusion_matrix(y_train, y_train_pred)\nconf_mx",
"_____no_output_____"
],
[
"def plot_confusion_matrix(matrix):\n \"\"\"If you prefer color and a colorbar\"\"\"\n fig = plt.figure(figsize=(8,8))\n ax = fig.add_subplot(111)\n cax = ax.matshow(matrix)\n fig.colorbar(cax)",
"_____no_output_____"
],
[
"plt.matshow(conf_mx, cmap=plt.cm.gray)\nsave_fig(\"confusion_matrix_plot\", tight_layout=False)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### normalizing confusion matrix",
"_____no_output_____"
]
],
[
[
"row_sums = conf_mx.sum(axis=1, keepdims=True)\nnorm_conf_mx = conf_mx / row_sums",
"_____no_output_____"
],
[
"np.fill_diagonal(norm_conf_mx, 0)\nplt.matshow(norm_conf_mx, cmap=plt.cm.gray)\nsave_fig(\"confusion_matrix_errors_plot\", tight_layout=False)\nplt.show()",
"_____no_output_____"
],
[
"cl_a, cl_b = 3, 5\nX_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]\nX_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]\nX_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]\nX_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]\n\nplt.figure(figsize=(8,8))\nplt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)\nplt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)\nplt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)\nplt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)\nsave_fig(\"error_analysis_digits_plot\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Multilabel classification",
"_____no_output_____"
]
],
[
[
"from sklearn.neighbors import KNeighborsClassifier\n\ny_train_large = (y_train >= 7)\ny_train_odd = (y_train % 2 == 1)\ny_multilabel = np.c_[y_train_large, y_train_odd]\n\nknn_clf = KNeighborsClassifier()\nknn_clf.fit(X_train, y_multilabel)",
"_____no_output_____"
],
[
"knn_clf.predict([some_digit])",
"_____no_output_____"
]
],
[
[
"**Warning**: the following cell may take a very long time (possibly hours depending on your hardware).",
"_____no_output_____"
]
],
[
[
"y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3, n_jobs=-1)\nf1_score(y_multilabel, y_train_knn_pred, average=\"macro\")",
"_____no_output_____"
]
],
[
[
"# Multioutput classification",
"_____no_output_____"
]
],
[
[
"noise = np.random.randint(0, 100, (len(X_train), 784))\nX_train_mod = X_train + noise\nnoise = np.random.randint(0, 100, (len(X_test), 784))\nX_test_mod = X_test + noise\ny_train_mod = X_train\ny_test_mod = X_test",
"_____no_output_____"
],
[
"some_index = 5500\nplt.subplot(121); plot_digit(X_test_mod[some_index])\nplt.subplot(122); plot_digit(y_test_mod[some_index])\nsave_fig(\"noisy_digit_example_plot\")\nplt.show()",
"_____no_output_____"
],
[
"knn_clf.fit(X_train_mod, y_train_mod)\nclean_digit = knn_clf.predict([X_test_mod[some_index]])\nplot_digit(clean_digit)\nsave_fig(\"cleaned_digit_example_plot\")",
"_____no_output_____"
]
],
[
[
"# Extra material",
"_____no_output_____"
],
[
"## Dummy (ie. random) classifier",
"_____no_output_____"
]
],
[
[
"from sklearn.dummy import DummyClassifier\ndmy_clf = DummyClassifier()\ny_probas_dmy = cross_val_predict(dmy_clf, X_train, y_train_5, cv=3, method=\"predict_proba\")\ny_scores_dmy = y_probas_dmy[:, 1]",
"_____no_output_____"
],
[
"fprr, tprr, thresholdsr = roc_curve(y_train_5, y_scores_dmy)\nplot_roc_curve(fprr, tprr)",
"_____no_output_____"
]
],
[
[
"## KNN classifier",
"_____no_output_____"
]
],
[
[
"from sklearn.neighbors import KNeighborsClassifier\nknn_clf = KNeighborsClassifier(n_jobs=-1, weights='distance', n_neighbors=4)\nknn_clf.fit(X_train, y_train)",
"_____no_output_____"
],
[
"y_knn_pred = knn_clf.predict(X_test)",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\naccuracy_score(y_test, y_knn_pred)",
"_____no_output_____"
],
[
"from scipy.ndimage.interpolation import shift\ndef shift_digit(digit_array, dx, dy, new=0):\n return shift(digit_array.reshape(28, 28), [dy, dx], cval=new).reshape(784)\n\nplot_digit(shift_digit(some_digit, 5, 1, new=100))",
"_____no_output_____"
],
[
"X_train_expanded = [X_train]\ny_train_expanded = [y_train]\nfor dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):\n shifted_images = np.apply_along_axis(shift_digit, axis=1, arr=X_train, dx=dx, dy=dy)\n X_train_expanded.append(shifted_images)\n y_train_expanded.append(y_train)\n\nX_train_expanded = np.concatenate(X_train_expanded)\ny_train_expanded = np.concatenate(y_train_expanded)\nX_train_expanded.shape, y_train_expanded.shape",
"_____no_output_____"
],
[
"knn_clf.fit(X_train_expanded, y_train_expanded)",
"_____no_output_____"
],
[
"y_knn_expanded_pred = knn_clf.predict(X_test)",
"_____no_output_____"
],
[
"accuracy_score(y_test, y_knn_expanded_pred)",
"_____no_output_____"
],
[
"ambiguous_digit = X_test[2589]\nknn_clf.predict_proba([ambiguous_digit])",
"_____no_output_____"
],
[
"plot_digit(ambiguous_digit)",
"_____no_output_____"
]
],
[
[
"# Exercise",
"_____no_output_____"
],
[
"## 1. An MNIST Classifier With Over 97% Accuracy\nHint:\nthe KNeighborsClassifier works quite well for this task; you just need to find good\nhyperparameter values (try a grid search on the weights and n_neighbors hyperparameters).\n",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import GridSearchCV\n",
"_____no_output_____"
],
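[
"# A possible sketch of the missing grid search (an assumption, not the original solution cell):\n# it follows the hint by searching over the weights and n_neighbors hyperparameters; the\n# candidate values below are illustrative choices. This defines the grid_search object used below.\nfrom sklearn.neighbors import KNeighborsClassifier\n\nparam_grid = [{'weights': ['uniform', 'distance'], 'n_neighbors': [3, 4, 5]}]\n\nknn_clf = KNeighborsClassifier()\ngrid_search = GridSearchCV(knn_clf, param_grid, cv=5, verbose=3, n_jobs=-1)\ngrid_search.fit(X_train, y_train)",
"_____no_output_____"
],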
[
"grid_search.best_params_",
"_____no_output_____"
],
[
"grid_search.best_score_",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\n\ny_pred = grid_search.predict(X_test)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
]
],
[
[
"## 2. Data Augmentation\nWrite a function that can shift an MNIST image in any direction (left, right, up, or down) by one\npixel.\n5 Then, for each image in the training set, create four shifted copies (one per direction) and add\nthem to the training set. Finally, train your best model on this expanded training set and measure its\naccuracy on the test set. You should observe that your model performs even better now! This\ntechnique of artificially growing the training set is called data augmentation or training set\nexpansion.\n",
"_____no_output_____"
]
],
[
[
"from scipy.ndimage.interpolation import shift",
"_____no_output_____"
],
[
"def shift_image(image, dx, dy):\n pass",
"_____no_output_____"
],
[
"image = X_train[1000]\nshifted_image_down = shift_image(image, 0, 5)\nshifted_image_left = shift_image(image, -5, 0)\n\nplt.figure(figsize=(12,3))\nplt.subplot(131)\nplt.title(\"Original\", fontsize=14)\nplt.imshow(image.reshape(28, 28), interpolation=\"nearest\", cmap=\"Greys\")\nplt.subplot(132)\nplt.title(\"Shifted down\", fontsize=14)\nplt.imshow(shifted_image_down.reshape(28, 28), interpolation=\"nearest\", cmap=\"Greys\")\nplt.subplot(133)\nplt.title(\"Shifted left\", fontsize=14)\nplt.imshow(shifted_image_left.reshape(28, 28), interpolation=\"nearest\", cmap=\"Greys\")\nplt.show()",
"_____no_output_____"
],
[
"X_train_augmented = [image for image in X_train]\ny_train_augmented = [label for label in y_train]\n\nfor dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):\n for image, label in zip(X_train, y_train):\n X_train_augmented.append(shift_image(image, dx, dy))\n y_train_augmented.append(label)\n\nX_train_augmented = np.array(X_train_augmented)\ny_train_augmented = np.array(y_train_augmented)",
"_____no_output_____"
],
[
"shuffle_idx = np.random.permutation(len(X_train_augmented))\nX_train_augmented = X_train_augmented[shuffle_idx]\ny_train_augmented = y_train_augmented[shuffle_idx]",
"_____no_output_____"
],
[
"y_pred = ...\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
]
],
[
[
"## 3. Spam classifier\nDownload examples of spam and ham from Apache SpamAssassin’s public datasets.\n\nUnzip the datasets and familiarize yourself with the data format.\n\nSplit the datasets into a training set and a test set.\n\nWrite a data preparation pipeline to convert each email into a feature vector. Your preparation\npipeline should transform an email into a (sparse) vector indicating the presence or absence of\neach possible word. For example, if all emails only ever contain four words, “Hello,” “how,”\n“are,” “you,” then the email “Hello you Hello Hello you” would be converted into a vector [1,\n0, 0, 1] (meaning [“Hello” is present, “how” is absent, “are” is absent, “you” is present]), or\n[3, 0, 0, 2] if you prefer to count the number of occurrences of each word.\n\nYou may want to add hyperparameters to your preparation pipeline to control whether or not to\nstrip off email headers, convert each email to lowercase, remove punctuation, replace all URLs\nwith “URL,” replace all numbers with “NUMBER,” or even perform stemming (i.e., trim off\nword endings; there are Python libraries available to do this).\n\nThen try out several classifiers and see if you can build a great spam classifier, with both high\nrecall and high precision.",
"_____no_output_____"
],
[
"First, let's fetch the data:",
"_____no_output_____"
]
],
[
[
"import os\nimport tarfile\nfrom six.moves import urllib\n\nDOWNLOAD_ROOT = \"http://spamassassin.apache.org/old/publiccorpus/\"\nHAM_URL = DOWNLOAD_ROOT + \"20030228_easy_ham.tar.bz2\"\nSPAM_URL = DOWNLOAD_ROOT + \"20030228_spam.tar.bz2\"\nSPAM_PATH = os.path.join(\"datasets\", \"spam\")\n\ndef fetch_spam_data(spam_url=SPAM_URL, spam_path=SPAM_PATH):\n if not os.path.isdir(spam_path):\n os.makedirs(spam_path)\n for filename, url in ((\"ham.tar.bz2\", HAM_URL), (\"spam.tar.bz2\", SPAM_URL)):\n path = os.path.join(spam_path, filename)\n if not os.path.isfile(path):\n urllib.request.urlretrieve(url, path)\n tar_bz2_file = tarfile.open(path)\n tar_bz2_file.extractall(path=SPAM_PATH)\n tar_bz2_file.close()",
"_____no_output_____"
],
[
"fetch_spam_data()",
"_____no_output_____"
]
],
[
[
"Next, let's load all the emails:",
"_____no_output_____"
]
],
[
[
"HAM_DIR = os.path.join(SPAM_PATH, \"easy_ham\")\nSPAM_DIR = os.path.join(SPAM_PATH, \"spam\")\nham_filenames = [name for name in sorted(os.listdir(HAM_DIR)) if len(name) > 20]\nspam_filenames = [name for name in sorted(os.listdir(SPAM_DIR)) if len(name) > 20]",
"_____no_output_____"
],
[
"len(ham_filenames)",
"_____no_output_____"
],
[
"len(spam_filenames)",
"_____no_output_____"
]
],
[
[
"We can use Python's `email` module to parse these emails (this handles headers, encoding, and so on):",
"_____no_output_____"
]
],
[
[
"import email\nimport email.policy\n\ndef load_email(is_spam, filename, spam_path=SPAM_PATH):\n directory = \"spam\" if is_spam else \"easy_ham\"\n with open(os.path.join(spam_path, directory, filename), \"rb\") as f:\n return email.parser.BytesParser(policy=email.policy.default).parse(f)",
"_____no_output_____"
],
[
"ham_emails = [load_email(is_spam=False, filename=name) for name in ham_filenames]\nspam_emails = [load_email(is_spam=True, filename=name) for name in spam_filenames]",
"_____no_output_____"
]
],
[
[
"Let's look at one example of ham and one example of spam, to get a feel of what the data looks like:",
"_____no_output_____"
]
],
[
[
"print(ham_emails[1].get_content().strip())",
"_____no_output_____"
],
[
"print(spam_emails[6].get_content().strip())",
"_____no_output_____"
]
],
[
[
"Some emails are actually multipart, with images and attachments (which can have their own attachments). Let's look at the various types of structures we have:",
"_____no_output_____"
]
],
[
[
"def get_email_structure(email):\n if isinstance(email, str):\n return email\n payload = email.get_payload()\n if isinstance(payload, list):\n return \"multipart({})\".format(\", \".join([\n get_email_structure(sub_email)\n for sub_email in payload\n ]))\n else:\n return email.get_content_type()",
"_____no_output_____"
],
[
"from collections import Counter\n\ndef structures_counter(emails):\n structures = Counter()\n for email in emails:\n structure = get_email_structure(email)\n structures[structure] += 1\n return structures",
"_____no_output_____"
],
[
"structures_counter(ham_emails).most_common()",
"_____no_output_____"
],
[
"structures_counter(spam_emails).most_common()",
"_____no_output_____"
]
],
[
[
"It seems that the ham emails are more often plain text, while spam has quite a lot of HTML. Moreover, quite a few ham emails are signed using PGP, while no spam is. In short, it seems that the email structure is useful information to have.",
"_____no_output_____"
],
[
"Now let's take a look at the email headers:",
"_____no_output_____"
]
],
[
[
"for header, value in spam_emails[0].items():\n print(header,\":\",value)",
"_____no_output_____"
]
],
[
[
"There's probably a lot of useful information in there, such as the sender's email address ([email protected] looks fishy), but we will just focus on the `Subject` header:",
"_____no_output_____"
]
],
[
[
"spam_emails[0][\"Subject\"]",
"_____no_output_____"
]
],
[
[
"Okay, before we learn too much about the data, let's not forget to split it into a training set and a test set:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom sklearn.model_selection import train_test_split\n\nX = np.array(ham_emails + spam_emails)\ny = np.array([0] * len(ham_emails) + [1] * len(spam_emails))\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)",
"_____no_output_____"
]
],
[
[
"Okay, let's start writing the preprocessing functions. First, we will need a function to convert HTML to plain text. Arguably the best way to do this would be to use the great [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) library, but I would like to avoid adding another dependency to this project, so let's hack a quick & dirty solution using regular expressions (at the risk of [un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment](https://stackoverflow.com/a/1732454/38626)). The following function first drops the `<head>` section, then converts all `<a>` tags to the word HYPERLINK, then it gets rid of all HTML tags, leaving only the plain text. For readability, it also replaces multiple newlines with single newlines, and finally it unescapes html entities (such as `>` or ` `):",
"_____no_output_____"
]
],
[
[
"import re\nfrom html import unescape\n\ndef html_to_plain_text(html):\n text = re.sub('<head.*?>.*?</head>', '', html, flags=re.M | re.S | re.I)\n text = re.sub('<a\\s.*?>', ' HYPERLINK ', text, flags=re.M | re.S | re.I)\n text = re.sub('<.*?>', '', text, flags=re.M | re.S)\n text = re.sub(r'(\\s*\\n)+', '\\n', text, flags=re.M | re.S)\n return unescape(text)",
"_____no_output_____"
]
],
[
[
"Let's see if it works. This is HTML spam:",
"_____no_output_____"
]
],
[
[
"html_spam_emails = [email for email in X_train[y_train==1]\n if get_email_structure(email) == \"text/html\"]\nsample_html_spam = html_spam_emails[7]\nprint(sample_html_spam.get_content().strip()[:1000], \"...\")",
"_____no_output_____"
]
],
[
[
"And this is the resulting plain text:",
"_____no_output_____"
]
],
[
[
"print(html_to_plain_text(sample_html_spam.get_content())[:1000], \"...\")",
"_____no_output_____"
]
],
[
[
"Great! Now let's write a function that takes an email as input and returns its content as plain text, whatever its format is:",
"_____no_output_____"
]
],
[
[
"def email_to_text(email):\n html = None\n for part in email.walk():\n ctype = part.get_content_type()\n if not ctype in (\"text/plain\", \"text/html\"):\n continue\n try:\n content = part.get_content()\n except: # in case of encoding issues\n content = str(part.get_payload())\n if ctype == \"text/plain\":\n return content\n else:\n html = content\n if html:\n return html_to_plain_text(html)",
"_____no_output_____"
],
[
"print(email_to_text(sample_html_spam)[:100], \"...\")",
"_____no_output_____"
]
],
[
[
"Let's throw in some stemming! For this to work, you need to install the Natural Language Toolkit ([NLTK](http://www.nltk.org/)). It's as simple as running the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the `--user` option):\n\n`$ pip3 install nltk`",
"_____no_output_____"
]
],
[
[
"try:\n import nltk\n\n stemmer = nltk.PorterStemmer()\n for word in (\"Computations\", \"Computation\", \"Computing\", \"Computed\", \"Compute\", \"Compulsive\"):\n print(word, \"=>\", stemmer.stem(word))\nexcept ImportError:\n print(\"Error: stemming requires the NLTK module.\")\n stemmer = None",
"_____no_output_____"
]
],
[
[
"We will also need a way to replace URLs with the word \"URL\". For this, we could use hard core [regular expressions](https://mathiasbynens.be/demo/url-regex) but we will just use the [urlextract](https://github.com/lipoja/URLExtract) library. You can install it with the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the `--user` option):\n\n`$ pip3 install urlextract`",
"_____no_output_____"
]
],
[
[
"try:\n import urlextract # may require an Internet connection to download root domain names\n \n url_extractor = urlextract.URLExtract()\n print(url_extractor.find_urls(\"Will it detect github.com and https://youtu.be/7Pq-S557XQU?t=3m32s\"))\nexcept ImportError:\n print(\"Error: replacing URLs requires the urlextract module.\")\n url_extractor = None",
"_____no_output_____"
]
],
[
[
"We are ready to put all this together into a transformer that we will use to convert emails to word counters. Note that we split sentences into words using Python's `split()` method, which uses whitespaces for word boundaries. This works for many written languages, but not all. For example, Chinese and Japanese scripts generally don't use spaces between words, and Vietnamese often uses spaces even between syllables. It's okay in this exercise, because the dataset is (mostly) in English.",
"_____no_output_____"
]
],
[
[
"from sklearn.base import BaseEstimator, TransformerMixin\n\nclass EmailToWordCounterTransformer(BaseEstimator, TransformerMixin):\n def __init__(self, strip_headers=True, lower_case=True, remove_punctuation=True,\n replace_urls=True, replace_numbers=True, stemming=True):\n self.strip_headers = strip_headers\n self.lower_case = lower_case\n self.remove_punctuation = remove_punctuation\n self.replace_urls = replace_urls\n self.replace_numbers = replace_numbers\n self.stemming = stemming\n def fit(self, X, y=None):\n return self\n def transform(self, X, y=None):\n X_transformed = []\n for email in X:\n text = email_to_text(email) or \"\"\n if self.lower_case:\n text = text.lower()\n if self.replace_urls and url_extractor is not None:\n urls = list(set(url_extractor.find_urls(text)))\n urls.sort(key=lambda url: len(url), reverse=True)\n for url in urls:\n text = text.replace(url, \" URL \")\n if self.replace_numbers:\n text = re.sub(r'\\d+(?:\\.\\d*(?:[eE]\\d+))?', 'NUMBER', text)\n if self.remove_punctuation:\n text = re.sub(r'\\W+', ' ', text, flags=re.M)\n word_counts = Counter(text.split())\n if self.stemming and stemmer is not None:\n stemmed_word_counts = Counter()\n for word, count in word_counts.items():\n stemmed_word = stemmer.stem(word)\n stemmed_word_counts[stemmed_word] += count\n word_counts = stemmed_word_counts\n X_transformed.append(word_counts)\n return np.array(X_transformed)",
"_____no_output_____"
]
],
[
[
"Let's try this transformer on a few emails:",
"_____no_output_____"
]
],
[
[
"X_few = X_train[:3]\nX_few_wordcounts = EmailToWordCounterTransformer().fit_transform(X_few)\nX_few_wordcounts",
"_____no_output_____"
]
],
[
[
"This looks about right!",
"_____no_output_____"
],
[
"Now we have the word counts, and we need to convert them to vectors. For this, we will build another transformer whose `fit()` method will build the vocabulary (an ordered list of the most common words) and whose `transform()` method will use the vocabulary to convert word counts to vectors. The output is a sparse matrix.",
"_____no_output_____"
]
],
[
[
"from scipy.sparse import csr_matrix\n\nclass WordCounterToVectorTransformer(BaseEstimator, TransformerMixin):\n def __init__(self, vocabulary_size=1000):\n self.vocabulary_size = vocabulary_size\n def fit(self, X, y=None):\n total_count = Counter()\n for word_count in X:\n for word, count in word_count.items():\n total_count[word] += min(count, 10)\n most_common = total_count.most_common()[:self.vocabulary_size]\n self.most_common_ = most_common\n self.vocabulary_ = {word: index + 1 for index, (word, count) in enumerate(most_common)}\n return self\n def transform(self, X, y=None):\n rows = []\n cols = []\n data = []\n for row, word_count in enumerate(X):\n for word, count in word_count.items():\n rows.append(row)\n cols.append(self.vocabulary_.get(word, 0))\n data.append(count)\n return csr_matrix((data, (rows, cols)), shape=(len(X), self.vocabulary_size + 1))",
"_____no_output_____"
],
[
"vocab_transformer = WordCounterToVectorTransformer(vocabulary_size=10)\nX_few_vectors = vocab_transformer.fit_transform(X_few_wordcounts)\nX_few_vectors",
"_____no_output_____"
],
[
"X_few_vectors.toarray()",
"_____no_output_____"
]
],
[
[
"What does this matrix mean? Well, the 64 in the third row, first column, means that the third email contains 64 words that are not part of the vocabulary. The 1 next to it means that the first word in the vocabulary is present once in this email. The 2 next to it means that the second word is present twice, and so on. You can look at the vocabulary to know which words we are talking about. The first word is \"of\", the second word is \"and\", etc.",
"_____no_output_____"
]
],
[
[
"vocab_transformer.vocabulary_",
"_____no_output_____"
]
],
[
[
"We are now ready to train our first spam classifier! Let's transform the whole dataset:",
"_____no_output_____"
]
],
[
[
"from sklearn.pipeline import Pipeline\n\npreprocess_pipeline = Pipeline([\n (\"email_to_wordcount\", EmailToWordCounterTransformer()),\n (\"wordcount_to_vector\", WordCounterToVectorTransformer()),\n])\n\nX_train_transformed = preprocess_pipeline.fit_transform(X_train)",
"_____no_output_____"
],
[
"from sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import cross_val_score\n\nlog_clf = LogisticRegression(random_state=42)\nscore = cross_val_score(log_clf, X_train_transformed, y_train, cv=3, verbose=3)\nscore.mean()",
"_____no_output_____"
]
],
[
[
"Over 98.7%, not bad for a first try! :) However, remember that we are using the \"easy\" dataset. You can try with the harder datasets, the results won't be so amazing. You would have to try multiple models, select the best ones and fine-tune them using cross-validation, and so on.\n\nBut you get the picture, so let's stop now, and just print out the precision/recall we get on the test set:",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import precision_score, recall_score\n\nX_test_transformed = preprocess_pipeline.transform(X_test)\n\nlog_clf = LogisticRegression(random_state=42)\nlog_clf.fit(X_train_transformed, y_train)\n\ny_pred = log_clf.predict(X_test_transformed)\n\nprint(\"Precision: {:.2f}%\".format(100 * precision_score(y_test, y_pred)))\nprint(\"Recall: {:.2f}%\".format(100 * recall_score(y_test, y_pred)))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e735b52aa51b323599065ceaa1ce5c9388bc64de | 46,214 | ipynb | Jupyter Notebook | text-nlp.ipynb | lnsongxf/coding-for-economists | d54f47ec98a38a1b45de183a1f5c746f38941a8a | [
"MIT"
] | null | null | null | text-nlp.ipynb | lnsongxf/coding-for-economists | d54f47ec98a38a1b45de183a1f5c746f38941a8a | [
"MIT"
] | null | null | null | text-nlp.ipynb | lnsongxf/coding-for-economists | d54f47ec98a38a1b45de183a1f5c746f38941a8a | [
"MIT"
] | 1 | 2021-10-29T22:20:08.000Z | 2021-10-29T22:20:08.000Z | 40.046794 | 941 | 0.623621 | [
[
[
"# Natural Language Processing\n\nThis chapter covers text analysis, also known as natural language processing. We'll cover tokenisation of text, removing stop words, counting words, performing other statistics on words, and analysing the parts of speech. The focus here is on English, but many of the methods-and even the libraries-are relevant to other languages too.\n\n## Introduction\n\nWhen doing NLP, it's worth thinking carefully about the unit of analysis: is it a corpus, a text, a line, a paragraph, a sentence, a word, or even a character? It could also be two of these simultaneously, and working with document x token matrices is one very common way of doing NLP. Although we'll be mixing between a few of these in this chapter, thinking about what the block of text data you're working with will really help you keep track of what operations are being deployed and how they might interact.\n\nIn case it's also useful to know, three of the most loved NLP packages are [**nltk**](https://www.nltk.org/), [**spaCy**](https://spacy.io/), and [**gensim**](https://radimrehurek.com/gensim/). As you progress through the chapter, you should also bear in mind that some of the methods we'll see are computationally expensive and you might want to fall back on simpler approaches, such as those seen in the previous chapter, if you have large volumes of text.\n\nIn this chapter, we'll use a single example and using NLP on it in a few different ways. First, though, we need to read in the text data we'll be using, part of Adam Smith's *The Wealth of Nations* and do some light cleaning of it.\n\nInitially, we'll read in our text so that each new line appears on a different row of a **pandas** dataframe. We will end up working with it both as a vector of lines and, later, as a vector of lists of words. We'll also import the packages we'll need; remember, if you need these on your computer you may need to run `pip install packagename` on your own computer.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport string",
"_____no_output_____"
],
[
"df = pd.read_csv(\n \"https://github.com/aeturrell/coding-for-economists/raw/main/data/smith_won.txt\",\n delimiter=\"\\n\",\n names=[\"text\"],\n)\ndf.head()",
"_____no_output_____"
]
],
[
[
"We need to do a bit of light text cleaning before we get on to the more in-depth natural language processing. We'll make use of vectorised string operations as seen in the [Introduction to Text](text-intro) chapter. First, we want to put everything in lower case:\n",
"_____no_output_____"
]
],
[
[
"df[\"text\"] = df[\"text\"].str.lower()\ndf.head()",
"_____no_output_____"
]
],
[
[
"Next, we'll remove the punctuation from the text. You may not always wish to do this but it's a good default.",
"_____no_output_____"
]
],
[
[
"translator = string.punctuation.maketrans({x: \"\" for x in string.punctuation})\ndf[\"text\"] = df[\"text\"].str.translate(translator)\ndf.head()",
"_____no_output_____"
]
],
[
[
"Okay, we now have rows and rows of lower case words without punctuation.\n\n```{admonition} Exercise\nRemove all vowels from the vector of text using `str.translate`.\n```",
"_____no_output_____"
],
[
"While we're doing some text cleaning, let's also remove the excess whitespace found in, for example, the first entry. Leaning on the cleaning methods from the previous chapter, we'll use regular expressions to do this:\n",
"_____no_output_____"
]
],
[
[
"df[\"text\"] = df[\"text\"].str.replace(\"\\s+?\\W+\", \" \", regex=True)",
"_____no_output_____"
]
],
[
[
"This searches for multiple whitespaces that preceede non-word characters and replaces them with a single whitespace.",
"_____no_output_____"
],
[
"## Tokenisation\n\nWe're going to now see an example of tokenisation: the process of taking blocks of text and breaking them down into tokens, most commonly a word but potentially all one and two word pairs. Note that you might sometimes see all two word pairs referred to as 2-grams, with an n-gram being all phrases of n words. There are many ways to tokenise text; we'll look at two of the most common: using regular expressions and using pre-configured NLP packages.\n\n### Tokenisation with regular expressions\n\nBecause regular expressions excel at finding patterns in text, they can also be used to decide where to split text up into tokens. For a very simple example, let's take the first line of our text example:",
"_____no_output_____"
]
],
[
[
"import re\n\nword_pattern = r\"\\w+\"\ntokens = re.findall(word_pattern, df.iloc[0, 0])\ntokens",
"_____no_output_____"
]
],
[
[
"This produced a split of a single line into one word tokens that are represented by a list of strings. We could have also asked for other variations, eg sentences, by asking to split at every \".\". \n",
"_____no_output_____"
],
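[
"# A quick illustrative sketch (not in the original): the same regex approach can produce rough\n# 'sentence' tokens by matching runs of characters between full stops. A toy string is used\n# here because the punctuation has already been stripped from df.\nsentence_pattern = r'[^.]+'\nre.findall(sentence_pattern, 'The first sentence. The second sentence.')",
"_____no_output_____"
],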
[
"### Tokenisation using NLP tools\n\nMany of the NLP packages available in Python come with built-in tokenisation tools. We'll use nltk for tokenisation.\n",
"_____no_output_____"
]
],
[
[
"from nltk.tokenize import word_tokenize\n\nword_tokenize(df.iloc[0, 0])",
"_____no_output_____"
]
],
[
[
"We have the same results as before when we used regex. Now let's scale this tokenisation up to our whole corpus while retaining the lines of text, giving us a structure of the form (lines x tokens):",
"_____no_output_____"
]
],
[
[
"df[\"tokens\"] = df[\"text\"].apply(lambda x: word_tokenize(x))\ndf.head()",
"_____no_output_____"
]
],
[
[
"**nltk** also has a `sent_tokenize` function that tokenises sentences, although as it makes use of punctuation you must take care with what pre-cleaning of text you undertake.\n",
"_____no_output_____"
],
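[
"# A quick illustration (not in the original): sentence tokenisation relies on punctuation,\n# so it is shown here on a raw string rather than on our cleaned dataframe.\nfrom nltk.tokenize import sent_tokenize\n\nsent_tokenize('It is cheap. It is, however, not free.')",
"_____no_output_____"
],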
[
"## Removing Stop Words\n\nStop words are frequent but uninformative words such as 'that', 'which', 'the', 'is', 'and', and 'but'. These words tend to be very common in the English language, but knowing that they appear frequently in a corpus doesn't really tell us much. Therefore, it is quite common to strip these 'stop' words out of text before doing any count-based analysis (or to use methods that implicitly ignore them). Many NLP libraries come with built-in methods that remove stop words. \n\nIn this example of removing stop words, we'll use the [**nltk**](https://www.nltk.org/) library. We'll filter out any stopwords from the first entry in the tokens columns of our dataframe. Note that stop are often an add-on to a base library, and so are not always available from installing a package alone-one often needs to download the stop words relevant to whatever language you're working with.\n",
"_____no_output_____"
]
],
[
[
"import nltk\n\nstopwords = nltk.corpus.stopwords.words(\n \"english\"\n) # Note that you may need to download these on your machine using nltk.download() within Python\nwords_filtered = [\n word.lower() for word in df.loc[0, \"tokens\"] if word.lower() not in stopwords\n]\nwords_filtered",
"_____no_output_____"
]
],
[
[
"Having filtered the first entry, we can see that words such as 'an' and 'into' have disappeared but we have retained more informative words such as 'inquiry' and 'nature'. Processing one entry is not enough: we need all of the lines to have stopwords removed. So we can now scale this up to the full corpus with **pandas**. Just as we did above, we'll use a list comprehension to do this: but we'll vectorise the list comprehension across the whole \"tokens\" series of our dataframe.",
"_____no_output_____"
]
],
[
[
"df[\"tokens\"] = df[\"tokens\"].apply(\n lambda x: [word.lower() for word in x if word.lower() not in stopwords]\n)\ndf.head()",
"_____no_output_____"
]
],
[
[
"Now we have a much reduced set of words in our tokens, which will make the next step of analysis more meaningful.",
"_____no_output_____"
],
[
"## Counting Text\n\nThere are several ways of performing basic counting statistics on text. We saw one in the previous chapter, `str.count()`, but that only applies to one word at a time. Often, we're interested in the relative counts of words in a corpus. In this section, we'll look at two powerful ways of computing this: using the `Counter` function and via term frequenc-inverse document frequency.\n\nFirst, `Counter`, which is a built-in Python library that does pretty much what you'd expect. Here's a simple example:\n",
"_____no_output_____"
]
],
[
[
"from collections import Counter\n\nfruit_list = [\n \"apple\",\n \"apple\",\n \"orange\",\n \"satsuma\",\n \"banana\",\n \"orange\",\n \"mango\",\n \"satsuma\",\n \"orange\",\n]\nfreq = Counter(fruit_list)\nfreq",
"_____no_output_____"
]
],
[
[
"Counter returns a `collections.Counter` object where the numbers of each type in a given input list are summed. The resulting dictionnary of unique counts can be extracted using `dict(freq)`, and `Counter` has some other useful functions too including `most_common()` which, given a number `n`, returns `n` tuples of the form `(thing, count)`:",
"_____no_output_____"
]
],
[
[
"freq.most_common(10)",
"_____no_output_____"
]
],
[
[
"Say we wanted to apply this not just to every line in our corpus separately, but to our whole corpus in one go; how would we do it? `Counter` will happily accept a list but our dataframe token column is currently a vector of lists. So we must first transform the token column to a single list of all tokens and then apply `Counter`. To achieve the former and flatten a list of lists, we'll use `itertools` chain function which makes an iterator that returns elements from the first iterable until it is exhausted, then proceeds to the next iterable, until all of the iterables in all inputs are exhausted. For example, given `[a, b, c]` and `[d, e, f]` as arguments, this function would return `[a, b, c, d, e, f]`. Because this function accepts an arbitrary number of iterable arguments, we use the splat operator, aka `*`, to tell it to expect lots of different arguments. The second step using `Counter` is far more straightforward!",
"_____no_output_____"
]
],
[
[
"import itertools\n\nmerged_list = list(itertools.chain(*df[\"tokens\"].to_list()))\nfreq = Counter(merged_list)\nfreq.most_common(10)",
"_____no_output_____"
]
],
[
[
"Looking at the tuples representing the 10 most words in the corpus, there are some interesting patterns. \"price\" and \"labour\" are hardly surprises, while \"silver\" perhaps reflects the time in which the book was written a little more. \"one\", \"upon\", and \"may\" are candidates for context-specific stopwords; while our NLTK stopwords might work well for modern text, they omit words that were once more common but that are equally uninformative to the stopwords we did use. There's no reason why these words couldn't be added to our list of stopwords and the process re-run.\n\n```{admonition} Exercise\nExtend the list of stopwords to include 'may', 'upon', 'one', and 'much', re-create the filtered tokens, and compute the 10 most common terms.\n```",
"_____no_output_____"
],
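[
"# One possible answer to the exercise above (an assumption, not from the original text):\n# extend the stopword list with the context-specific words and recount.\nextra_stopwords = stopwords + ['may', 'upon', 'one', 'much']\nrefiltered_tokens = [word for word in merged_list if word not in extra_stopwords]\nCounter(refiltered_tokens).most_common(10)",
"_____no_output_____"
],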
[
"## Sentence Tokenisation (and reading in text as sentences)\n\nSo far we have been working with text that is split into lines and then tokenised into words. But working with lines of text is not always the most natural unit of analysis; sometimes sentences make more sense. So let's now work with sentences and see an example of tokenising those.\n\nFirst, we need to read in the text as sentences. We can't do this with pandas, because that package is limited to tabular data or very simple delimiters (like commas).\n\nIf we were working with a local file on our computer, we could read it in using the following code\n\n```python\nwith open('smith_won.txt') as f:\n raw_text = f.read()\n```\n\nAs it is, the text file we'd like to grab is on the web so we'll use a package that can grab files from the internet to get hold of it.",
"_____no_output_____"
]
],
[
[
"import requests\n\nresponse = requests.get(\"https://github.com/aeturrell/coding-for-economists/raw/main/data/smith_won.txt\")\nraw_text = response.text\nraw_text[:100]",
"_____no_output_____"
]
],
[
[
"Great, so we have our raw text. Let's now tokenise it using **nltk**.\n",
"_____no_output_____"
]
],
[
[
"from nltk.tokenize import sent_tokenize\n\nsent_list = sent_tokenize(raw_text)\ndf_sent = pd.DataFrame({\"text\": sent_list})\ndf_sent.head()",
"_____no_output_____"
]
],
[
[
"Now we just need to apply all of the cleaning procudures we did before——that is lowering the case, removing punctuation, and removing any excess whitespace.",
"_____no_output_____"
]
],
[
[
"df_sent[\"text\"] = (df_sent[\"text\"]\n .str.lower()\n .str.translate(translator)\n .str.replace(\"\\s+?\\W+\", \" \", regex=True))\ndf_sent.head()",
"_____no_output_____"
]
],
[
[
"We'll use this tokenised version by sentence in the next section.",
"_____no_output_____"
],
[
"### TF-IDF\n\nTerm frequency - inverse document frequency, often referred to as *tf-idf*, is a measure of term counts (where terms could be 1-grams, 2-grams, etc.) that is weighted to try and identify the most *distinctively* frequent terms in a given corpus. It's made up of two parts: a term-frequency (which upweights according to counts of terms) and an inverse document frequency (which downweights terms that appear frequently across the corpus). Define $t$ as a term and $d$ as a document. In our example thus far, $t$ has represented words while our \"documents\" have been lines from *Wealth of Nations*. Then a simple formula for term frequency is:\n\n$$\n{\\displaystyle \\mathrm {tf} (t,d)={\\frac {f_{t,d}}{\\sum _{t'\\in d}{f_{t',d}}}}}\n$$\n\nwhere $f_{t,d}$ represents the frequency of term $t$ in document $d$. To compute term frequencies, we will use the [**sklearn**]() package, which has a function called `CountVectorizer`.",
"_____no_output_____"
]
],
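[
[
"# A tiny worked example of the tf formula above (an assumption, not in the original): the\n# term frequencies for the first line's tokens are the counts divided by the number of\n# tokens in that 'document'.\nfirst_doc = df.loc[0, 'tokens']\n{term: count / len(first_doc) for term, count in Counter(first_doc).items()}",
"_____no_output_____"
]
],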
[
[
"from sklearn.feature_extraction.text import CountVectorizer\nimport numpy as np\n\nvectorizer = CountVectorizer(stop_words=stopwords)\nX = vectorizer.fit_transform(df[\"text\"])\nprint(f\"The shape of the resulting tf matrix is {np.shape(X)}\")\nvectorizer.get_feature_names()[500:510]",
"_____no_output_____"
]
],
[
[
"This created a matrix of 5,160 terms by 7,750 \"documents\" (actually sentences in our example) running with more or less the default settings. The only change we made to those default settings was to pass in a list of stopwords that we used earlier. The other default settings tokenise words using a regex of \"(?u)\\b\\w\\w+\\b\", assume text is lowercase, only accept n-grams in the range (1, 1), and have no limit on the maximum number of features.\n\nThe matrix X that comes out is of an interesting type:",
"_____no_output_____"
]
],
[
[
"type(X)",
"_____no_output_____"
]
],
[
[
"ie, it's a *sparse matrix*. Sparse matrices are more efficient for your computer when there are many missing zeros in a matrix. They do all of the usual things that matrices (arrays) do, but are just more convenient in this case. Most notably, we can perform counts with them and we can turn them into a regular matrix using `.toarray()`.\n\nLet's do some basic stats using the matrix of counts and the **matplotlib** visualisation library.",
"_____no_output_____"
]
],
[
[
"counts_df = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names()).T\ncounts_df = counts_df.sum(axis=1)\ncounts_df = counts_df.sort_values(ascending=False)\ncounts_df.head()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\n# Plot settings\nplt.style.use(\n \"https://github.com/aeturrell/coding-for-economists/raw/main/plot_style.txt\"\n)\n\n\nnum_to_plot = 20\nx_pos = np.arange(num_to_plot)\nfig, ax = plt.subplots()\nax.barh(x_pos, counts_df[:num_to_plot], align=\"center\", alpha=0.5)\nax.set_yticks(x_pos)\nax.set_yticklabels(counts_df[:num_to_plot].index, fontsize=8)\nax.set_ylim(-1, num_to_plot)\nax.set_xlabel('Count of terms across all \"documents\"')\nax.set_title(f\"The {num_to_plot} top 1-grams\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"Let's see what happens when we ask only for bi-grams.",
"_____no_output_____"
]
],
[
[
"# Count bigrams:\nvectorizer = CountVectorizer(stop_words=stopwords, ngram_range=(2, 2), max_features=300)\nbigrams_df = (\n pd.DataFrame(\n vectorizer.fit_transform(df[\"text\"]).toarray(),\n columns=vectorizer.get_feature_names(),\n )\n .T.sum(axis=1)\n .sort_values(ascending=False)\n)\n\n# Plot top n 2-grams\nnum_to_plot = 20\nx_pos = np.arange(num_to_plot)\nfig, ax = plt.subplots()\nax.barh(x_pos, bigrams_df[:num_to_plot], align=\"center\", alpha=0.5)\nax.set_yticks(x_pos)\nax.set_yticklabels(bigrams_df[:num_to_plot].index, fontsize=8)\nax.set_ylim(-1, num_to_plot)\nax.set_xlabel('Count of terms across all \"documents\"')\nax.set_title(f\"The {num_to_plot} top 2-grams\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"As you might expect, the highest frequency with which 2-grams occur is less than the highest frequency with which 1-grams occur.\n\nNow let's move on to the inverse document frequency. The most common definition is \n\n$$\n\\mathrm{idf}(t, D) = \\log \\frac{N}{|\\{d \\in D: t \\in d\\}|}\n$$\n\nwhere $D$ is the set of documents, $N=|D|$, and $|\\{d \\in D: t \\in d\\}|$ is the number of documents in which $t$ appears. Putting both together we have\n\n$$\n\\mathrm{tfidf}(t, d, D) = \\mathrm{tf}(t, d) \\cdot \\mathrm{idf}(t, D)\n$$\n\nBecause of power-law scaling, problems with zero-count entries, and other issues, this basic formula is often modified and the [wikipedia page](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) for tf-idf gives a good run-down of some common options.\n\nTo perform tfidf with code, we'll use another **sklearn** function, `TfidfVectorizer`.",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_extraction.text import TfidfVectorizer\n\ntfidf_vectorizer = TfidfVectorizer(stop_words=stopwords, sublinear_tf=True)\nX = tfidf_vectorizer.fit_transform(df[\"text\"])\ncounts_tfidf = (\n pd.DataFrame(X.toarray(), columns=tfidf_vectorizer.get_feature_names())\n .T.sum(axis=1)\n .sort_values(ascending=False)\n)\n# Plot top n 1-grams\nnum_to_plot = 20\nx_pos = np.arange(num_to_plot)\nfig, ax = plt.subplots()\nax.barh(x_pos, counts_tfidf[:num_to_plot], align=\"center\", alpha=0.5)\nax.set_yticks(x_pos)\nax.set_yticklabels(counts_tfidf[:num_to_plot].index, fontsize=8)\nax.set_ylim(-1, num_to_plot)\nax.set_xlabel('tf-idf weighted terms across all \"documents\"')\nax.set_title(f\"The {num_to_plot} top 1-grams: tf-idf; X has shape {np.shape(X)}\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"There are small differences between this ranking of terms versus the original tf 1-gram version above. In the previous one, words such as 'one' were slightly higher in the ranking but their common appearance in multiple documents (lines) downweights them here. In this case, we also used the sublinear option, which uses $1+\\log(\\mathrm{tf})$ in place of $\\mathrm{tf}$.\n\n#### Vector Inner Product and Cosine Similarity\n\nBecause the output of tf or tf-idf is a matrix, many possibilities related to linear algebra are opened up. In particular, we can think of this the creation of a tf-idf matrix as definining a $|t| = T$ dimensional vector space that is spanned by the term vectors (which act like basis vectors). Each document in the corpus then has a vector representation in terms of the basis vectors. A consequence is that there is a sensible inner vector product defined on the space. As a demonstration, let's look for the line in the book that is closest to the title according to this vector space. The vector for the first line is just the first row in $X$. We take the argmax of the inner product with all of the *other* line vectors to find the entry in `X` that maximises the inner product.",
"_____no_output_____"
]
],
[
[
"max_val = np.argmax(np.dot(X[0, :], X[1:, :].T))\nprint(max_val)",
"_____no_output_____"
],
[
"print(\n f\"Cosine similarity is {round(np.dot(X[0], X[max_val+1].T).toarray().flatten()[0], 2)}\"\n)\nfor i, sent in enumerate(df.iloc[[0, max_val + 1], 0]):\n print(f\"Sentence {i}:\")\n print(\"\\t\" + sent.strip() + \"\\n\")",
"_____no_output_____"
]
],
[
[
"We can see from this example *why* the sentence we found is the most similar in the book to the title: it contains a phrase that is very similar to part of the title. It's worth noting here that tf-idf (and tf) do not care about *word order*, they only care about frequency, and so sometimes the most similar sentences are not what you would expect if you were judging similarity based on concepts. Another way of saying this is that the concept of 'similarity' as used by tf-idf is limited.",
"_____no_output_____"
],
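[
"# A small illustration (an assumption, not in the original): tf and tf-idf are bag-of-words\n# measures, so two sentences with the same words in a different order get identical vectors.\ndemo_vectorizer = CountVectorizer()\ndemo_X = demo_vectorizer.fit_transform(['the dog bit the man', 'the man bit the dog'])\ndemo_X.toarray()",
"_____no_output_____"
],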
[
"#### Transform versus Fit Transform\n\nThe `fit_transform` function we've seen is actually performing two operations here: i) create a vector space from the basis defined by terms from the text and ii) express each document (here, a sentence) as a vector in this vector space. But there's no reason why these two operations have to be linked. In fact, by separating out these two operations, we can do nifty things like express one text in the basis vectors of another. This is more useful in practice than you might think. It allows you to ask questions like, \"which of the texts in my reference corpus is most closest to these other texts?\", and more. We would ask this question by taking the inner vector product of the matrices expressing the two corpora, and find the rows of Wealth of Nations that have the greatest cosine similarity with the other texts.\n\nLet's see an example, with some test texts.",
"_____no_output_____"
]
],
[
[
"df_test = pd.DataFrame({\"text\": [\"poverty is a trap and rearing children in it is hard and perilous\",\n \"people in different trades can meet and develop a conspiracy which ultimately hurts consumers by raising prices\"]})\n",
"_____no_output_____"
]
],
[
[
"Now we need to i) create the vector space, ii) express WoN in the vector space, iii) express the test texts in the vector space, iv) find which rows of the WoN match best the test texts, and v) print out those rows.\n\n",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_extraction.text import TfidfVectorizer\n\ntfidf_vectorizer = TfidfVectorizer(stop_words=stopwords, sublinear_tf=True)\n# i)\nmodel = tfidf_vectorizer.fit(df_sent[\"text\"])\n# ii)\nX = tfidf_vectorizer.transform(df_sent[\"text\"])\n# iii)\nY = tfidf_vectorizer.transform(df_test[\"text\"])\n# iv)\nmax_index_pos = np.argmax(X*Y.T, 0).tolist()[0]\nmax_index_pos",
"_____no_output_____"
]
],
[
[
"Now, armed with the rows of $X$, we are ready for the final part, v)",
"_____no_output_____"
]
],
[
[
"for y_pos, x_pos in enumerate(max_index_pos):\n print(f'Sentence number {y_pos}:')\n print(f' test: {df_test.loc[y_pos, \"text\"]}')\n print(f' WoN: {df_sent.loc[x_pos, \"text\"]} \\n')",
"_____no_output_____"
]
],
[
[
"#### Using a Special Vocabulary\n\nBut why should the basis vectors come from the terms in another text? Couldn't they come from anywhere? The answer is, of course, yes. We could choose any set of basis vectors we liked to define our vector space, and express a text in it. For this, we need a *special vocabulary*.\n\nLet's see an example of expressing the Wealth of Nations in a particularly vocab. First, we must define our vocab:",
"_____no_output_____"
]
],
[
[
"vocab = [\"work\",\n \"wage\",\n \"labour\",\n \"real price\",\n \"money price\",\n \"productivity\"]",
"_____no_output_____"
]
],
[
[
"That done, we now plug our special vocab into `CountVectorizer` to tell it to ignore anything that isn't relevant (isn't in our vocab).",
"_____no_output_____"
]
],
[
[
"vectorizer = CountVectorizer(vocabulary=vocab, ngram_range=(1, 2))\ncounts_df = (\n pd.DataFrame(\n vectorizer.fit_transform(df_sent[\"text\"]).toarray(),\n columns=vectorizer.get_feature_names(),\n )\n .T.sum(axis=1)\n .sort_values(ascending=False)\n)\n\n# Plot counts from our vocab\nnum_to_plot = len(vocab)\nx_pos = np.arange(num_to_plot)\nfig, ax = plt.subplots()\nax.barh(x_pos, counts_df[:num_to_plot], align=\"center\", alpha=0.5)\nax.set_yticks(x_pos)\nax.set_yticklabels(counts_df[:num_to_plot].index, fontsize=8)\nax.set_ylim(-1, num_to_plot)\nax.set_xlabel('Count of terms in corpus')\nax.set_title(f\"Counts of vocab words in the Wealth of Nations\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"Note that we did not pass `stopwords` in this case; there's no need, because passing a `vocab` effectively says to categorise any word that is *not* in the special vocabulary as a stopword. We also still passed an n-gram range to ensure our longest n-gram, with $n=2$, was counted.",
"_____no_output_____"
],
[
"### Filtering Out Frequent and Infrequent Words\n\nAs well as passing stopwords in, defining vocabularies, and limiting the n-gram range, there's another couple of ways to cut down on the number of terms that tf-idf takes account of. The first is to use the `max_features` setting to limit how many terms are tracked (this only keeps the top terms). A second is to have frequency cut-offs, both for very frequent words and for very infrequent words (be careful of this one if you're doing any kind of out-of-sample exercise such as forecasting.) The keywords for frequency cut-offs are `max_df` and `min_df`.",
"_____no_output_____"
],
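[
"# An illustrative sketch (an assumption, not in the original): keep at most 500 features,\n# drop terms appearing in more than 90% of documents, and drop terms appearing in fewer\n# than 5 documents. The thresholds are arbitrary choices for demonstration.\ntrimmed_vectorizer = TfidfVectorizer(\n    stop_words=stopwords, max_features=500, max_df=0.9, min_df=5\n)\nX_trimmed = trimmed_vectorizer.fit_transform(df_sent['text'])\nnp.shape(X_trimmed)",
"_____no_output_____"
],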
[
"## Context of Terms\n\nIt's all very well counting terms, but without the context of the surrounding words, it may not be all that informative. **nltk** has some functions that can help us. First, we have to pass our raw text into an **nltk** text object.\n\n",
"_____no_output_____"
]
],
[
[
"from nltk.text import Text\n\nw_o_n = Text(word_tokenize(raw_text))",
"_____no_output_____"
]
],
[
[
"Now let's imagine we're interested in the context of a particular term, say 'price'. We can run:",
"_____no_output_____"
]
],
[
[
"w_o_n.concordance(\"price\")",
"_____no_output_____"
]
],
[
[
"This gives us context for all fo the occurrences of the terms. Context is useful, but there's more than one kind. What about *where* in a text references to different ideas or terms appear? We can do that with *text dispersion plot*, as shown below for a selection of terms.",
"_____no_output_____"
]
],
[
[
"w_o_n.dispersion_plot([\"price\", \"labour\", \"production\", \"America\"])",
"_____no_output_____"
]
],
[
[
"## Stemming and Lemmatisation\n\nYou may have wondered, in these examples, what about words that mean the same but have different endings, for example \"work\", \"working\", \"worked\", and \"works\"? In most of the examples shown, we've only counted one of these words and thereby could *underestimate* their prescence. If what we really want to do is capture all discussion of a topic like 'work', we should really be counting every variation on the word representing that topic.\n\n*Stemming* is a way to do this because it takes the stem of all of these words, in this example \"work\", and then counts the stems. It's true that this is sometimes a bit more nonsensical than using the original word (think \"sci\" for \"science\", \"scientist\", and \"scientific\") but it can give a more accurate take on the occurrence of a term.\n\n\n**nltk** includes more than one stemmer to reduce words to their roots. Let's see what happens when we take the tokenised words and stem them.",
"_____no_output_____"
]
],
[
[
"from nltk import LancasterStemmer\n\n# create an object of class LancasterStemmer\nlancaster = LancasterStemmer()\n\ncleaner_text = raw_text.translate(translator).lower()\n\nstem_tokens = [lancaster.stem(term.lower()) for term in word_tokenize(cleaner_text)\n if term.lower() not in stopwords]\nstem_tokens[120:135]",
"_____no_output_____"
]
],
[
[
"Now we have \"pric\" instead of price, and \"compon\" instead of \"compnonent\", and so on. The stemming has taken away the ends of the words, leaving us with just their stem. Let's see if a word count following this approach will be different.",
"_____no_output_____"
]
],
[
[
"freq = Counter(stem_tokens)\nfreq.most_common(10)",
"_____no_output_____"
]
],
[
[
"In this case, the words that are most frequent are much the same: but you can imagine this could easily have *not* been the case and, if you're interested in fully capturing a topic, it's a good idea to at least check a stemmed version for comparison.",
"_____no_output_____"
],
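[
"To see why that check matters, here is a small sketch comparing raw and stemmed counts for one topic; it reuses the `word_tokenize` function and the `cleaner_text`, `stopwords`, and `stem_tokens` objects defined above, and the choice of \"work\" as the topic word is just for illustration:\n\n```python\nfrom collections import Counter\n\n# Counts of the un-stemmed tokens (lower-cased, stopwords removed)\nraw_counts = Counter(\n    term.lower() for term in word_tokenize(cleaner_text)\n    if term.lower() not in stopwords\n)\nstem_counts = Counter(stem_tokens)\n\n# Counting only the exact word \"work\" misses \"works\", \"working\", and so on,\n# whereas the Lancaster stem (typically \"work\") gathers the variants together\nvariants = [\"work\", \"works\", \"working\", \"worked\"]\nprint({v: raw_counts[v] for v in variants})\nprint(\"stem 'work':\", stem_counts[\"work\"])\n```",
"_____no_output_____"
],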
[
"*Lemmatisation* is slightly different; it's a bit more intelligent than just chopping off the end of the word because it considers context and converts a word to a base form, called a lemma. Let's perform the same exercise using lemmatisation.",
"_____no_output_____"
]
],
[
[
"from nltk import WordNetLemmatizer\n\n# create an object of class LancasterStemmer\nwnet_lemma = WordNetLemmatizer()\n\nlemma_tokens = [wnet_lemma.lemmatize(term.lower()) for term in word_tokenize(cleaner_text)\n if term.lower() not in stopwords]\nfreq = Counter(lemma_tokens)\nfreq.most_common(10)",
"_____no_output_____"
]
],
[
[
"The lemmatised words we're dealing with are more *understandable* than in the case of stemming, but note that the top ten most frequent words have changed a little too.",
"_____no_output_____"
],
[
"## Part of Speech Tagging\n\nSentences are made up of verbs, nouns, adjectives, pronouns, and more of the building blocks of language. Sometimes, when you're doing text analysis, it's useful to understand and extract only some so-called parts of speech (or PoS). The NLP tools we've already seen can help us to do that. In the example below, we'll use `pos_tag` to tag the different parts of speech in a sentence of tokenised text. The function returns tuples of '(word, part-of-speech)' that we can print out.\n\n```{note}\nYou may need to run `nltk.download('averaged_perceptron_tagger')` to use the `pos_tag` function.\n```",
"_____no_output_____"
]
],
[
[
"from nltk import pos_tag\n\nexample_sent = \"If we are going to die, let us die looking like a Peruvian folk band.\"\n\npos_tagged_words = pos_tag(word_tokenize(example_sent))\nfor word, pos in pos_tagged_words:\n if(word not in string.punctuation):\n print(f'The word \"{word}\" is a {pos}')",
"_____no_output_____"
]
],
[
[
"**nltk** uses contractions to refer to the different parts of speech: IN is a preposition, PRP a personal pronoun, VBP a verb (in non 3rd person singular present), JJ is an adjective, NN a noun, and so on.\n\nWhen might you actually use PoS tagging? You can imagine thinking about how the use of language is different or has changed across people or institutions. You might be interested in how more active language is being employed to help readers engage more with documents and reports issued by official organisations. You might be interested in removing all words that aren't, for example, nouns before doing some further analysis.",
"_____no_output_____"
],
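[
"As a minimal sketch of both ideas, the snippet below asks **nltk** what a tag means and then keeps only the noun tokens from the example sentence (the sentence and the choice of an \"NN\" prefix check are illustrative, not the only way to do this):\n\n```python\nimport nltk\nfrom nltk import pos_tag, word_tokenize\n\n# look up the meaning of a Penn Treebank tag\n# (this may require nltk.download(\"tagsets\") the first time)\nnltk.help.upenn_tagset(\"NN\")\n\nexample_sent = \"If we are going to die, let us die looking like a Peruvian folk band.\"\n\n# keep only tokens whose tag starts with \"NN\" (NN, NNS, NNP, NNPS)\nnouns = [word for word, tag in pos_tag(word_tokenize(example_sent)) if tag.startswith(\"NN\")]\nprint(nouns)\n```",
"_____no_output_____"
],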
[
"When it comes to PoS tagging, **nltk** is far from the only option. Another very powerful NLP library, [**spacy**](https://spacy.io/) definitely warrants a mention. Like **nltk**, **spacy** requires you to install add-ons called models to perform extra tasks. To install **spacy**, it's `pip install spacy` and to load the most commonly used model it's `python -m spacy download en_core_web_sm`, both to be run on the command line.\n\nLet's see the same PoS example but in **spacy**",
"_____no_output_____"
]
],
[
[
"import spacy\n\nnlp = spacy.load(\"en_core_web_sm\")\n\ndoc = nlp(example_sent)\n\npos_df = pd.DataFrame([(token.text, token.lemma_, token.pos_, token.tag_) for token in doc],\n columns=[\"text\", \"lemma\", \"pos\", \"tag\"])\npos_df",
"_____no_output_____"
]
],
[
[
"For those brave enough for the pun, **spacy** also has some nifty visualisation tools.",
"_____no_output_____"
]
],
[
[
"from spacy import displacy\n\ndoc = nlp(\"When you light a candle, you also cast a shadow.\")\n\ndisplacy.render(doc, style=\"dep\")",
"_____no_output_____"
]
],
[
[
"## Named Entity Recognition\n\nThis is another NLP tool that helps to pick apart the parts of language, in this case it's a method for extracting all of the entities named in a text, whether they be people, countries, cars, whatever.\n\nLet's see an example.",
"_____no_output_____"
]
],
[
[
"text = \"TAE Technologies, a California-based firm building technology to generate power from nuclear fusion, said on Thursday it had raised $280 million from new and existing investors, including Google and New Enterprise Associates.\"\n\ndoc = nlp(text)\n\ndisplacy.render(doc, style=\"ent\")",
"_____no_output_____"
]
],
[
[
"Pretty impressive stuff, but a health warning that there are plenty of texts that are not quite as clean as this one! As with the PoS tagger, you can extract the named entities in a tabular format for onward use:",
"_____no_output_____"
]
],
[
[
"pd.DataFrame([(ent.text, ent.start_char, ent.end_char, ent.label_) for ent in doc.ents],\n columns=[\"text\", \"start_pos\", \"end_pos\", \"label\"])",
"_____no_output_____"
]
],
[
[
"The table below gives the different label meanings in Named Entity Recognition:\n\n| Label \t| Meaning \t|\n|-------\t|---------------------\t|\n| geo \t| Geographical entity \t|\n| org \t| Organisation \t|\n| per \t| Person \t|\n| gpe \t| Geopolitical entity \t|\n| date \t| Time indicator \t|\n| art \t| Artifact \t|\n| eve \t| Event \t|\n| nat \t| Natural phenomenon \t|\n| money \t| Reference to money amount \t|",
"_____no_output_____"
],
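[
"Rather than memorising labels, you can ask **spacy** itself what a label means using `spacy.explain`. A small sketch, reusing the `doc` object from the news snippet above:\n\n```python\nimport spacy\n\n# spacy.explain returns a short description of a label or tag\nprint(spacy.explain(\"GPE\"))\nprint(spacy.explain(\"ORG\"))\n\n# explain each entity label that appeared in the example document\nfor ent in doc.ents:\n    print(ent.text, \"->\", ent.label_, \"-\", spacy.explain(ent.label_))\n```",
"_____no_output_____"
],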
[
"## Readability Statistics\n\nLike them or loathe them, readability statistics are widely used despite what flaws individual approaches may have. Let's take a look at at a package that can compute a wide range of them, [**textstat**](https://github.com/shivam5992/textstat). We'll see what it can do with English, but it supports other languages too. And we won't use all of its measures, just a few of the most well-known.\n\nAs ever, you will need to run `pip install textstat` on the command line if you don't already have this package installed.",
"_____no_output_____"
]
],
[
[
"import textstat\n\ntest_data = (\n \"Playing games has always been thought to be important to \"\n \"the development of well-balanced and creative children; \"\n \"however, what part, if any, they should play in the lives \"\n \"of adults has never been researched that deeply. I believe \"\n \"that playing games is every bit as important for adults \"\n \"as for children. Not only is taking time out to play games \"\n \"with our children and other adults valuable to building \"\n \"interpersonal relationships but is also a wonderful way \"\n \"to release built up tension.\"\n)\n\nstat_func_names = [textstat.flesch_reading_ease,\n textstat.flesch_kincaid_grade,\n textstat.automated_readability_index,\n textstat.dale_chall_readability_score,\n textstat.difficult_words,\n ]\n\ndf = pd.DataFrame([[fn(test_data) for fn in stat_func_names]],\n columns=[fn.__name__ for fn in stat_func_names],\n index=[\"score\"]).T\ndf",
"_____no_output_____"
]
],
[
[
"## See Also\n\nWe've only scratched the surface of NLP here; there are many other libraries and methods out there. A good easy-to-use introductory NLP package that we didn't feature is [**textblob**](https://textblob.readthedocs.io/en/dev/). In terms of methods, we haven't looked at noun phrase extraction or spelling correction-but **textblob** offers both of these.",
"_____no_output_____"
],
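[
"Here is a tiny, hedged taster of those two **textblob** features; it assumes you have run `pip install textblob` and `python -m textblob.download_corpora`, and the example sentence (with its deliberate misspelling) is made up for illustration:\n\n```python\nfrom textblob import TextBlob\n\nblob = TextBlob(\"The anual report sets out the central bank's monetary policy stance.\")\n\n# noun phrase extraction\nprint(blob.noun_phrases)\n\n# spelling correction returns a new TextBlob with suggested fixes\nprint(blob.correct())\n```",
"_____no_output_____"
],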
[
"## Review\n\nThis chapter has provided an overview of some common methods in natural language processing. If you've worked through this chapter, you should now be comfortable:\n\n- ✅ splitting text into lines or sentences;\n- ✅ tokenising text;\n- ✅ removing stopwords from text;\n- ✅ computing tf-idf matrices and using the vector spaces that they create for simple similarity calculations;\n- ✅ disentangling the different parts of speech, including any named entities;\n- ✅ stemming and lemmatising text; and\n- ✅ computing statistics on the readability of text.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |