Dataset columns: markdown (strings, 0–1.02M chars), code (strings, 0–832k chars), output (strings, 0–1.02M chars), license (strings, 3–36 chars), path (strings, 6–265 chars), repo_name (strings, 6–127 chars)
2.1.1) Writing To Disk
# Save as a simple NumPy array (.npy)
# Run time: <1s
np.save("points.npy", points)

# Save as compressed NumPy archive (.npz)
# Run time: ~5s
np.savez_compressed("points.npz", points)
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
Note that the above NumPy format is missing the keys (UUIDs) of each point. This may not be required in all cases. However, for the sake of comparison we also generate a NumPy archive with keys included. We store the UUIDs as integers to save space and for a fair comparison where the optimal storage method is used in each case. Note, however, that UUIDs are too large to fit in a standard C integer type and are therefore stored as an object array.
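To illustrate the point about UUIDs not fitting a fixed-width dtype, here is a small standalone sketch (illustrative only, not part of the benchmark) showing NumPy falling back to an object array for 128-bit UUID integers:

```python
import uuid
import numpy as np

key = uuid.uuid4().int
print(key.bit_length())  # a UUID is a 128-bit number, far wider than int64/uint64

# NumPy cannot represent such values with a fixed-width integer dtype,
# so it stores them as Python objects instead.
keys = np.array([uuid.uuid4().int for _ in range(3)])
print(keys.dtype)  # object
```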
# Generate UUIDs
# Run time: ~10s
keys = np.array([uuid.uuid4().int for _ in range(len(points))])  # Generate some UUIDs as keys

# Save in NumPy format (.npz)
# Run time: <1s
np.savez("uuid_points.npz", keys=keys, coords=points)

# Save in compressed (zip) NumPy format (.npz)
# Run time: ~10s
np.savez_compressed("uuid_points_compressed.npz", keys=keys, coords=points)

# Write to SQLite with SQLiteStore
# Run time: ~10m
points_sqlite_store = SQLiteStore("points.db")
_ = points_sqlite_store.append_many(
    annotations=(Annotation(Point(x, y)) for x, y in points)
)

# Load a DictionaryStore into memory by copying from the SQLiteStore
# Run time: ~1m 30s
points_dict_store = DictionaryStore(Path("points.ndjson"))
for key, value in points_sqlite_store.items():
    points_dict_store[key] = value

# Save as GeoJSON
# Run time: ~1m 30s
points_sqlite_store.to_geojson("points.geojson")

# Save as ndjson
# Run time: ~1m 30s
# Spec: https://github.com/ndjson/ndjson-spec
points_sqlite_store.to_ndjson("points.ndjson")
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
2.1.2) Points Dataset Statistics Summary

| Format | Write Time | Size |
|-------------------------------:|-----------:|-------:|
| SQLiteStore (.db) | 6m 20s | 893 MB |
| ndjson | 1m 23s | 667 MB |
| GeoJSON | 1m 42s | 500 MB |
| NumPy + UUID (.npz) | 0.5s | 165 MB |
| NumPy + UUID Compressed (.npz) | 31s | 136 MB |
| NumPy (.npy) | 0.1s | 76 MB |
| NumPy Compressed (.npz) | 3.3s | 66 MB |

Note that the points SQLite database is significantly larger than the NumPy arrays on disk. The NumPy array is much more storage efficient, partly because no R-tree index or unique identifier (UUID) is stored for each point. For a fairer comparison, another NumPy archive (.npz) is created where the keys are stored along with the coordinates. Also note that although the compressed NumPy representation is much smaller, it must be decompressed in memory before it can be used. The uncompressed versions may be memory mapped if their size exceeds the available memory.

2.1.3) Simple Box Query

Here we evaluate the performance of a simple box query on the data. All points with x and y coordinates between 128 and 256 are retrieved. It is assumed that the data is already in memory for the NumPy formats. In reality this would not be the case for the first query: all data would have to be read from disk, which is a significant overhead. However, this cost is amortised across many queries. To ensure the fairest possible comparison, it is assumed that many queries will be performed and that this data loading cost is negligible.
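As an aside, the up-front loading cost for the uncompressed NumPy format can also be reduced by memory mapping the file, as mentioned in the summary above. A minimal sketch (reusing the `points.npy` file written earlier) would be:

```python
import numpy as np

# Memory-map the uncompressed array so pages are read from disk on demand
# rather than loading the whole file into RAM.
points_mm = np.load("points.npy", mmap_mode="r")

# Slices and boolean masks work as usual; only the touched pages are read.
subset = points_mm[:1000]
```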
box = Polygon.from_bounds(128, 128, 256, 256)

# Time numpy
numpy_runs = timeit.repeat(
    (
        "where = np.all(["
        "points[:, 0] > 128,"
        "points[:, 0] < 256,"
        "points[:, 1] > 128,"
        "points[:, 1] < 256"
        "], 0)\n"
        "uuids = keys[where]\n"
        "result = points[where]\n"
    ),
    globals={"keys": keys, "points": points, "np": np},
    number=1,
    repeat=10,
)

# Time SQLiteStore
sqlite_runs = timeit.repeat(
    "store.query(box)",
    globals={"store": points_sqlite_store, "box": box},
    number=1,
    repeat=10,
)

# Time DictionaryStore
dict_runs = timeit.repeat(
    "store.query(box)",
    globals={"store": points_dict_store, "box": box},
    number=1,
    repeat=10,
)

plot_results(
    experiments=[dict_runs, sqlite_runs, numpy_runs],
    title="Points Box Query (5 Million Points)",
    tick_label=["DictionaryStore", "SQLiteStore", "NumPy Array"],
)
plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
Although the NumPy array is very space efficient on disk, it is not as fast to query as the `SQLiteStore`. The `SQLiteStore` is likely faster due to its use of the R-tree index. Furthermore, the method used to store the points in a NumPy array is limited in that it does not use UUIDs, which makes merging two datasets more difficult because the indexes of points no longer uniquely identify them. Additionally, only homogeneous data such as two-dimensional coordinates can be practically stored in this way. If the user would like to store variable-length data structures such as polygons, or even mix data types by storing both points and polygons, then using raw NumPy arrays in this way becomes cumbersome and begins to offer little benefit in terms of storage efficiency or query performance.

2.1.4) Polygon Query
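Before the polygon-query timings, a small aside on the raggedness point above. The sketch below (illustrative only, with made-up coordinates) shows what happens when variable-length geometries are forced into a NumPy array: the result is an object array of Python lists, which no longer supports the vectorised filtering used in the box query earlier.

```python
import numpy as np

# A triangle and a quadrilateral have different numbers of vertices,
# so NumPy can only hold them as generic Python objects.
ragged = np.array(
    [
        [(0, 0), (1, 0), (0, 1)],          # 3 vertices
        [(0, 0), (1, 0), (1, 1), (0, 1)],  # 4 vertices
    ],
    dtype=object,
)
print(ragged.dtype)  # object

# Queries now degrade to Python-level loops instead of vectorised masks.
hits = [poly for poly in ragged if any(x > 0.5 for x, _ in poly)]
```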
big_triangle = Polygon(
    shell=[
        (1024, 1024),
        (1024, 4096),
        (4096, 4096),
        (1024, 1024),
    ]
)

# Time SQLiteStore
sqlite_runs = timeit.repeat(
    "store.query(polygon)",
    globals={"store": points_sqlite_store, "polygon": big_triangle},
    number=1,
    repeat=10,
)

# Time DictionaryStore
dict_runs = timeit.repeat(
    "store.query(polygon)",
    globals={"store": points_dict_store, "polygon": big_triangle},
    number=1,
    repeat=10,
)

plot_results(
    experiments=[dict_runs, sqlite_runs],
    title="Polygon Query (5 Million Points)",
    tick_label=["DictionaryStore", "SQLiteStore"],
)
plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
2.2) Cell Boundary Polygons Dataset

Here we generate a much larger and more complex polygon dataset. This consists of a grid of over 5 million generated cell-boundary-like polygons.
# Generate a grid of 5 million cell boundary polygons (2237 x 2237)
# Run time: ~10m
import random

random.seed(42)

cell_polygons = [
    Annotation(geometry=polygon, properties={"class": random.randint(0, 4)})
    for polygon in tqdm(cell_grid(size=(2237, 2237), spacing=35), total=2237**2)
]
100%|██████████| 5004169/5004169 [10:04<00:00, 8277.35it/s]
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
2.2.1) Write To Formats For Comparison
# Write to an SQLiteStore on disk (SSD for recorded times here)
# Run time: ~30m
cell_sqlite_store = SQLiteStore("cells.db")
_ = cell_sqlite_store.append_many(annotations=cell_polygons)

# Create a copy as an in memory DictionaryStore
# Run time: ~5m
cell_dict_store = DictionaryStore()
for key, value in tqdm(  # Show a nice progress bar
    cell_sqlite_store.items(),
    total=len(cell_sqlite_store),
    leave=False,
    position=0,
):
    cell_dict_store[key] = value

# Transform into a numpy array
# Run Time: ~1m
cell_polygons_np = np.array(
    [np.array(a.geometry.exterior.coords) for a in tqdm(cell_polygons)], dtype=object
)

# Create an Nx4 index of (xmin, ymin, xmax, ymax) as a simple spatial
# index to speed up the numpy query.
# Run time: ~1m
min_max_index = np.array(
    [(*np.min(coords, 0), *np.max(coords, 0)) for coords in cell_polygons_np]
)

# Write to GeoJSON
# Run time: ~10m
cell_dict_store.to_geojson("cells.geojson")

# Write to line delimited JSON (ndjson)
# Run time: ~10m
cell_dict_store.to_ndjson("cells.ndjson")

# Zstandard compression of ndjson to demonstrate how well it compresses.
# Gzip may also be used but is slower to compress.
# Run time: ~1m
! zstd -f -k cells.ndjson -o cells.ndjson.zstd

# Zstandard compression of sqlite to demonstrate how well it compresses.
# Gzip may also be used but is slower to compress.
# Run time: ~20s
! zstd -f -k cells.db -o cells.db.zstd

# Write as a pickle (list)
# Run time: ~2m
with open("cells.pickle", "wb") as fh:
    pickle.dump(cell_polygons, fh)

# Write as a pickle (dict)
# Run time: ~15m
with open("cells-dict.pickle", "wb") as fh:
    pickle.dump(cell_dict_store._rows, fh)

# Write dictionary store to a pickle
# Run time: ~20m
with open("cells-dictionary-store.pickle", "wb") as fh:
    pickle.dump(cell_dict_store, fh)

# Write as numpy object array (similar to writing out with pickle).
# NumPy cannot handle ragged arrays and therefore dtype must be object.
# Run time: ~30m
np.save("cells.npy", np.asanyarray(cell_polygons_np, dtype=object))

# Create UUIDs, and get the class labels for each cell boundary
# Run time: ~2m
_uuids = [str(uuid.uuid4()) for _ in cell_polygons]
_cls = [x.properties["class"] for x in cell_polygons]

# Write as NumPy archive (.npz) with uuid and min_max_index
# Run time: ~40m
np.savez(
    "cells.npz",
    uuids=_uuids,
    polygons=cell_polygons_np,
    min_max_index=min_max_index,
    cls=_cls,
)
del _uuids, _cls
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
2.2.2) Time To Write Summary Statistics

The following is a summary of the time required to write each format to disk and the total disk space occupied by the final output. Note that some of these formats, such as GeoJSON, compress well with schemes such as gzip and zstd, reducing the disk space by approximately half. Statistics for zstd compressed data are also reported below. It should be noted that the data must be decompressed to be usable. However, for gzip and zstd, this may be done in a streaming fashion from disk.

| Format | Write Time | Size |
|------------------:|------------:|--------:|
| SQLiteStore (.db) | 33m 48.4s | 4.9 GB |
| GeoJSON | 11m 32.9s | 8.9 GB |
| ndjson | 9m 0.9s | 8.8 GB |
| pickle | 1m 2.9s | 1.8 GB |
| zstd (SQLite) | 18.2s | 3.7 GB |
| zstd (ndjson) | 43.7s | 3.6 GB |
| NumPy (.npy) | 50.3s | 1.8 GB |
| NumPy (.npz) | 55.3s | 2.6 GB |

2.2.3) Box Query
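As a brief aside before the query timings: the streaming decompression mentioned in the summary above can be sketched for line-delimited formats such as ndjson with Python's standard-library `gzip` module. The file name below (`cells.ndjson.gz`) is hypothetical, since the archives produced above use zstd, which would need the third-party `zstandard` package instead:

```python
import gzip
import json

# Stream annotations one line at a time, without decompressing the whole
# file to disk or loading it all into memory.
with gzip.open("cells.ndjson.gz", "rt") as fh:
    for line in fh:
        record = json.loads(line)
        # ... process one annotation record at a time ...
```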
# Run time: ~5m
# Setup
xmin, ymin, xmax, ymax = 128, 12, 256, 256
box = Polygon.from_bounds(xmin, ymin, xmax, ymax)

# Time DictionaryStore
dict_runs = timeit.repeat(
    "store.query(box)",
    globals={"store": cell_dict_store, "box": box},
    number=1,
    repeat=3,
)

# Time SQLite store
sqlite_runs = timeit.repeat(
    "store.query(box)",
    globals={"store": cell_sqlite_store, "box": box},
    number=1,
    repeat=3,
)

# Plot results
plot_results(
    experiments=[dict_runs, sqlite_runs],
    title="Box Query (5 Million Polygons)",
    tick_label=[
        "DictionaryStore",
        "SQLiteStore",
    ],
)
plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
2.2.4) Polygon Query
# Run Time: 35s
# Setup
big_triangle = Polygon(
    shell=[
        (1024, 1024),
        (1024, 4096),
        (4096, 4096),
        (1024, 1024),
    ]
)

# Time DictionaryStore
dict_runs = timeit.repeat(
    "store.query(polygon)",
    globals={"store": cell_dict_store, "polygon": big_triangle},
    number=1,
    repeat=3,
)

# Time SQLite store
sqlite_runs = timeit.repeat(
    "store.query(polygon)",
    globals={"store": cell_sqlite_store, "polygon": big_triangle},
    number=1,
    repeat=3,
)

# Plot results
plot_results(
    experiments=[dict_runs, sqlite_runs],
    title="Polygon Query (5 Million Polygons)",
    tick_label=[
        "DictionaryStore",
        "SQLiteStore",
    ],
)
plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
2.2.5) Predicate Query
# Run Time: ~10m # Setup xmin, ymin, xmax, ymax = 128, 12, 256, 256 box = Polygon.from_bounds(xmin, ymin, xmax, ymax) predicate = "props['class'] == 0" # Time DictionaryStore dict_runs = timeit.repeat( "store.query(box, predicate)", globals={"store": cell_dict_store, "box": box, "predicate": predicate}, number=1, repeat=3, ) # Time SQLiteStore sqlite_runs = timeit.repeat( "store.query(box, where=predicate)", globals={"store": cell_sqlite_store, "box": box, "predicate": predicate}, number=1, repeat=3, ) np_stmt = f""" polygons = [ polygon for polygon in tqdm(cell_polygons_np) if np.all([np.max(polygon, 0) >= ({xmin}, {ymin}), np.min(polygon, 0) <= ({xmax}, {ymax})]) ] """ # Time numpy numpy_runs = timeit.repeat( np_stmt, globals={"cell_polygons_np": cell_polygons_np, "np": np, "tqdm": lambda x: x}, number=1, repeat=3, ) # Time shapely shapely_runs = timeit.repeat( "polygons = [box.intersects(ann.geometry) for ann in cell_polygons]", globals={"box": box, "cell_polygons": cell_polygons}, number=1, repeat=3, ) # Time box indexed numpy numpy_index_runs = timeit.repeat( "in_box = np.all(min_max_index[:, :2] <= (xmax, ymax), 1) & np.all(min_max_index[:, 2:] >= (xmin, ymin), 1)\n" "polygons = [p for p, w in zip(cell_polygons, in_box) if w]", globals={ "min_max_index": min_max_index, "xmin": xmin, "ymin": ymin, "xmax": xmax, "ymax": ymax, "np": np, "cell_polygons": cell_polygons, }, number=1, repeat=3, ) # Run Time: ~5s # Plot results plot_results( experiments=[dict_runs, sqlite_runs, numpy_runs, shapely_runs, numpy_index_runs], title="Box Query", tick_label=[ "DictionaryStore", "SQLiteStore", "NumPy\n(Simple Loop)", "Shapely\n(Simple Loop)", "NumPy\n(With Bounds Index)", ], ) plt.xticks(rotation=90) plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
2.3) Size vs Approximate Lower Bound

Here we calculate an estimated lower bound on file size by finding the Shannon entropy of each file. This tells us the theoretical minimum number of bits required per byte. The lowest lower bound is then used as an estimate of the minimum file size possible to store the annotation data.
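For reference, the per-byte Shannon entropy used here is the standard formula, restated for clarity, where $p_i$ is the observed relative frequency of byte value $i$ in the sampled portion of the file:

$$H = -\sum_{i=0}^{255} p_i \log_2 p_i \quad \text{(bits per byte)}$$

$$\text{approximate lower bound (bytes)} \approx \frac{H}{8} \times \text{file size (bytes)}$$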
# Run Time: ~5m
# Files to consider containing keys, geometry, and properties.
# Files which are missing keys e.g. cells.pickle are excluded
# for a fair comparison.
file_names = [
    "cells-dictionary-store.pickle",
    "cells-dict.pickle",
    "cells.db",
    "cells.db.zstd",
    "cells.geojson",
    "cells.ndjson",
    "cells.ndjson.zstd",
]


def human_readable_bytes(byte_count: int) -> Tuple[int, str]:
    """Convert bytes to a human readable size and suffix."""
    for suffix in ["B", "KB", "MB", "GB", "TB"]:
        if byte_count < 1024:
            return byte_count, suffix
        byte_count /= 1024
    return byte_count, "PB"


def shannon_entropy(
    fp: Path,
    sample_size: int = 1e9,  # 1GiB
    stride: int = 7,
    skip: int = 1e5,  # 100KiB
) -> float:
    """Calculate the Shannon entropy of a file from a sample.

    The first `skip` bytes are skipped to avoid sampling low entropy
    (highly ordered) parts which commonly occur at the beginning
    e.g. headers.

    Args:
        fp: File path to calculate entropy of.
        sample_size: Number of bytes to sample from the file.
        stride: Number of bytes to skip between samples.
        skip: Number of bytes to skip before sampling.
    """
    npmmap = np.memmap(Path(fp), dtype=np.uint8, mode="r")
    values, counts = np.unique(
        npmmap[int(skip) : int(skip + (sample_size * stride)) : int(stride)],
        return_counts=True,
    )
    total = np.sum(counts)
    frequencies = {v: 0 for v in range(256)}
    for v, x in zip(values, counts):
        frequencies[v] = x / total
    frequency_array = np.array(list(frequencies.values()))
    epsilon = 1e-16
    return -np.sum(frequency_array * np.log2(frequency_array + epsilon))


# Find the min across all of the representations for the lowest lower
# bound.
bytes_lower_bounds = {
    path: (
        shannon_entropy(Path(path))
        / 8
        * len(np.memmap(path, dtype=np.uint8, mode="r"))
    )
    for path in tqdm([Path(".") / name for name in file_names], position=0, leave=False)
}
lowest_bytes_lower_bound = min(bytes_lower_bounds.values())
size, suffix = human_readable_bytes(lowest_bytes_lower_bound)
print(f"Approximate Lower Bound Size: {size:.2f} {suffix}")
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
Plot Results
# Get file sizes
file_sizes = {
    path: path.stat().st_size for path in [Path(".") / name for name in file_names]
}
# Sort by size
file_sizes = {k: v for k, v in sorted(file_sizes.items(), key=lambda x: x[1])}

# Plot
plt.bar(
    x=range(len(file_sizes)),
    height=file_sizes.values(),
    tick_label=[p.name for p in file_sizes],
    color=[f"C{i}" for i in range(len(file_sizes))],
)
plt.xlabel("File Name")
plt.ylabel("Bytes")
plt.xticks(rotation=90)
plt.hlines(
    y=lowest_bytes_lower_bound,
    xmin=-0.5,
    xmax=len(file_sizes) - 0.5,
    linestyles="dashed",
    color="black",
    label="Approximate Bytes Lower Bound",
)
plt.legend()
plt.tight_layout()
plt.title("Polygon Annotation File Sizes")
plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
The SQLite representation (4.9 GB) appears to be quite compact compared with GeoJSON and ndjson. Although not as compact as a dictionary pickle or Zstandard compressed ndjson, it offers a good compromise between compactness and read performance.

3: Extra Bits

3.1) Space Saving

A lot of space can be saved by rounding the coordinates to the nearest integer when storing them. Below we make a copy of the dataset with all coordinates rounded.
# Run Time: ~50m
! rm integer-cells.db
int_cell_sqlite_store = SQLiteStore("integer-cells.db")

# We use batches of 1000 to speed up appending
batch = {}
for key, annotation in tqdm(cell_sqlite_store.items(), total=len(cell_sqlite_store)):
    geometry = Polygon(np.array(annotation.geometry.exterior.coords).round())
    rounded_annotation = Annotation(geometry, annotation.properties)
    batch[key] = rounded_annotation
    if len(batch) >= 1000:
        int_cell_sqlite_store.append_many(batch.values(), batch.keys())
        batch = {}
_ = int_cell_sqlite_store.append_many(batch.values(), batch.keys())
100%|██████████| 10008338/10008338 [51:00<00:00, 3270.16it/s]
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
Here the database size is reduced to 2.9 GB, down from 4.9 GB. Additionally, when using integer coordinates, the database compresses much better. Zstandard can compress to approximately 60% of the original size (and 35% of the floating point coordinate database size). This may be done for archival purposes.
# Run time: ~15s
! zstd -f -k integer-cells.db -o integer-cells.db.zstd
integer-cells.db : 60.58% ( 2.86 GiB => 1.73 GiB, integer-cells.db.zstd)
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
With higher (slower) compression settings the space can be further reduced for long term storage.
# Run time: ~20m
! zstd -f -k -19 --long integer-cells.db -o integer-cells.db.19.zstd
integer-cells.db : 51.22% ( 2.86 GiB => 1.47 GiB, integer-cells.db.19.zstd)
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
Copyright 2017 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
_____no_output_____
Apache-2.0
exercises/.ipynb_checkpoints/synthetic_features_and_outliers-checkpoint.ipynb
Kabongosalomon/Crash-Course-Machine-Learning-
Synthetic Features and Outliers

**Learning Objectives:**
* Create a synthetic feature that is the ratio of two other features
* Use this new feature as an input to a linear regression model
* Improve the effectiveness of the model by identifying and clipping (removing) outliers out of the input data

Let's revisit our model from the previous First Steps with TensorFlow exercise. First, we'll import the California housing data into a *pandas* `DataFrame`:

Setup
from __future__ import print_function import math from IPython import display from matplotlib import cm from matplotlib import gridspec import matplotlib.pyplot as plt import numpy as np import pandas as pd import sklearn.metrics as metrics import tensorflow as tf from tensorflow.python.data import Dataset tf.logging.set_verbosity(tf.logging.ERROR) pd.options.display.max_rows = 10 pd.options.display.float_format = '{:.1f}'.format california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",") california_housing_dataframe = california_housing_dataframe.reindex( np.random.permutation(california_housing_dataframe.index)) california_housing_dataframe["median_house_value"] /= 1000.0 california_housing_dataframe
_____no_output_____
Apache-2.0
exercises/.ipynb_checkpoints/synthetic_features_and_outliers-checkpoint.ipynb
Kabongosalomon/Crash-Course-Machine-Learning-
Next, we'll set up our input function, and define the function for model training:
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None): """Trains a linear regression model of one feature. Args: features: pandas DataFrame of features targets: pandas DataFrame of targets batch_size: Size of batches to be passed to the model shuffle: True or False. Whether to shuffle the data. num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely Returns: Tuple of (features, labels) for next data batch """ # Convert pandas data into a dict of np arrays. features = {key:np.array(value) for key,value in dict(features).items()} # Construct a dataset, and configure batching/repeating. ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit ds = ds.batch(batch_size).repeat(num_epochs) # Shuffle the data, if specified. if shuffle: ds = ds.shuffle(buffer_size=10000) # Return the next batch of data. features, labels = ds.make_one_shot_iterator().get_next() return features, labels def train_model(learning_rate, steps, batch_size, input_feature): """Trains a linear regression model. Args: learning_rate: A `float`, the learning rate. steps: A non-zero `int`, the total number of training steps. A training step consists of a forward and backward pass using a single batch. batch_size: A non-zero `int`, the batch size. input_feature: A `string` specifying a column from `california_housing_dataframe` to use as input feature. Returns: A Pandas `DataFrame` containing targets and the corresponding predictions done after training the model. """ periods = 10 steps_per_period = steps / periods my_feature = input_feature my_feature_data = california_housing_dataframe[[my_feature]].astype('float32') my_label = "median_house_value" targets = california_housing_dataframe[my_label].astype('float32') # Create input functions. training_input_fn = lambda: my_input_fn(my_feature_data, targets, batch_size=batch_size) predict_training_input_fn = lambda: my_input_fn(my_feature_data, targets, num_epochs=1, shuffle=False) # Create feature columns. feature_columns = [tf.feature_column.numeric_column(my_feature)] # Create a linear regressor object. my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0) linear_regressor = tf.estimator.LinearRegressor( feature_columns=feature_columns, optimizer=my_optimizer ) # Set up to plot the state of our model's line each period. plt.figure(figsize=(15, 6)) plt.subplot(1, 2, 1) plt.title("Learned Line by Period") plt.ylabel(my_label) plt.xlabel(my_feature) sample = california_housing_dataframe.sample(n=300) plt.scatter(sample[my_feature], sample[my_label]) colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, periods)] # Train the model, but do so inside a loop so that we can periodically assess # loss metrics. print("Training model...") print("RMSE (on training data):") root_mean_squared_errors = [] for period in range (0, periods): # Train the model, starting from the prior state. linear_regressor.train( input_fn=training_input_fn, steps=steps_per_period, ) # Take a break and compute predictions. predictions = linear_regressor.predict(input_fn=predict_training_input_fn) predictions = np.array([item['predictions'][0] for item in predictions]) # Compute loss. root_mean_squared_error = math.sqrt( metrics.mean_squared_error(predictions, targets)) # Occasionally print the current loss. print(" period %02d : %0.2f" % (period, root_mean_squared_error)) # Add the loss metrics from this period to our list. 
root_mean_squared_errors.append(root_mean_squared_error) # Finally, track the weights and biases over time. # Apply some math to ensure that the data and line are plotted neatly. y_extents = np.array([0, sample[my_label].max()]) weight = linear_regressor.get_variable_value('linear/linear_model/%s/weights' % input_feature)[0] bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights') x_extents = (y_extents - bias) / weight x_extents = np.maximum(np.minimum(x_extents, sample[my_feature].max()), sample[my_feature].min()) y_extents = weight * x_extents + bias plt.plot(x_extents, y_extents, color=colors[period]) print("Model training finished.") # Output a graph of loss metrics over periods. plt.subplot(1, 2, 2) plt.ylabel('RMSE') plt.xlabel('Periods') plt.title("Root Mean Squared Error vs. Periods") plt.tight_layout() plt.plot(root_mean_squared_errors) # Create a table with calibration data. calibration_data = pd.DataFrame() calibration_data["predictions"] = pd.Series(predictions) calibration_data["targets"] = pd.Series(targets) display.display(calibration_data.describe()) print("Final RMSE (on training data): %0.2f" % root_mean_squared_error) return calibration_data
_____no_output_____
Apache-2.0
exercises/.ipynb_checkpoints/synthetic_features_and_outliers-checkpoint.ipynb
Kabongosalomon/Crash-Course-Machine-Learning-
Task 1: Try a Synthetic Feature

Both the `total_rooms` and `population` features count totals for a given city block. But what if one city block were more densely populated than another? We can explore how block density relates to median house value by creating a synthetic feature that's a ratio of `total_rooms` and `population`. In the cell below, create a feature called `rooms_per_person`, and use that as the `input_feature` to `train_model()`. What's the best performance you can get with this single feature by tweaking the learning rate? (The better the performance, the better your regression line should fit the data, and the lower the final RMSE should be.)

**NOTE**: You may find it helpful to add a few code cells below so you can try out several different learning rates and compare the results. To add a new code cell, hover your cursor directly below the center of this cell, and click **CODE**.
# # YOUR CODE HERE
# california_housing_dataframe["rooms_per_person"] =

calibration_data = train_model(
    learning_rate=0.00005,
    steps=500,
    batch_size=5,
    input_feature="rooms_per_person"
)
_____no_output_____
Apache-2.0
exercises/.ipynb_checkpoints/synthetic_features_and_outliers-checkpoint.ipynb
Kabongosalomon/Crash-Course-Machine-Learning-
Solution

Click below for a solution.
california_housing_dataframe["rooms_per_person"] = (
    california_housing_dataframe["total_rooms"] / california_housing_dataframe["population"])

calibration_data = train_model(
    learning_rate=0.05,
    steps=500,
    batch_size=5,
    input_feature="rooms_per_person")
_____no_output_____
Apache-2.0
exercises/.ipynb_checkpoints/synthetic_features_and_outliers-checkpoint.ipynb
Kabongosalomon/Crash-Course-Machine-Learning-
Task 2: Identify Outliers

We can visualize the performance of our model by creating a scatter plot of predictions vs. target values. Ideally, these would lie on a perfectly correlated diagonal line. Use Pyplot's [`scatter()`](https://matplotlib.org/gallery/shapes_and_collections/scatter.html) to create a scatter plot of predictions vs. targets, using the rooms-per-person model you trained in Task 1. Do you see any oddities? Trace these back to the source data by looking at the distribution of values in `rooms_per_person`.
# YOUR CODE HERE
_____no_output_____
Apache-2.0
exercises/.ipynb_checkpoints/synthetic_features_and_outliers-checkpoint.ipynb
Kabongosalomon/Crash-Course-Machine-Learning-
Solution

Click below for the solution.
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.scatter(calibration_data["predictions"], calibration_data["targets"])
_____no_output_____
Apache-2.0
exercises/.ipynb_checkpoints/synthetic_features_and_outliers-checkpoint.ipynb
Kabongosalomon/Crash-Course-Machine-Learning-
The calibration data shows most scatter points aligned to a line. The line is almost vertical, but we'll come back to that later. Right now let's focus on the ones that deviate from the line. We notice that they are relatively few in number. If we plot a histogram of `rooms_per_person`, we find that we have a few outliers in our input data:
plt.subplot(1, 2, 2)
_ = california_housing_dataframe["rooms_per_person"].hist()
_____no_output_____
Apache-2.0
exercises/.ipynb_checkpoints/synthetic_features_and_outliers-checkpoint.ipynb
Kabongosalomon/Crash-Course-Machine-Learning-
Task 3: Clip Outliers

See if you can further improve the model fit by setting the outlier values of `rooms_per_person` to some reasonable minimum or maximum. For reference, here's a quick example of how to apply a function to a Pandas `Series`:

    clipped_feature = my_dataframe["my_feature_name"].apply(lambda x: max(x, 0))

The above `clipped_feature` will have no values less than `0`.
# YOUR CODE HERE
_____no_output_____
Apache-2.0
exercises/.ipynb_checkpoints/synthetic_features_and_outliers-checkpoint.ipynb
Kabongosalomon/Crash-Course-Machine-Learning-
Solution

Click below for the solution. The histogram we created in Task 2 shows that the majority of values are less than `5`. Let's clip `rooms_per_person` to 5, and plot a histogram to double-check the results.
california_housing_dataframe["rooms_per_person"] = (
    california_housing_dataframe["rooms_per_person"]).apply(lambda x: min(x, 5))

_ = california_housing_dataframe["rooms_per_person"].hist()
_____no_output_____
Apache-2.0
exercises/.ipynb_checkpoints/synthetic_features_and_outliers-checkpoint.ipynb
Kabongosalomon/Crash-Course-Machine-Learning-
To verify that clipping worked, let's train again and print the calibration data once more:
calibration_data = train_model(
    learning_rate=0.05,
    steps=500,
    batch_size=5,
    input_feature="rooms_per_person")

_ = plt.scatter(calibration_data["predictions"], calibration_data["targets"])
_____no_output_____
Apache-2.0
exercises/.ipynb_checkpoints/synthetic_features_and_outliers-checkpoint.ipynb
Kabongosalomon/Crash-Course-Machine-Learning-
GAN Workflow Engine with the MedNIST Dataset

The MONAI framework can be used to easily design, train, and evaluate generative adversarial networks. This notebook exemplifies using MONAI components to design and train a simple GAN model to reconstruct images of Hand CT scans. Read the [MONAI Mednist GAN Tutorial](https://github.com/Project-MONAI/MONAI/blob/master/examples/notebooks/mednist_GAN_tutorial.ipynb) for details about the network architecture and loss functions.

**Table of Contents**
1. Setup
2. Initialize MONAI Components
    * Create image transform chain
    * Create dataset and dataloader
    * Define generator and discriminator
    * Create training handlers
    * Create GanTrainer
3. Run Training
4. Evaluate Results

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/MONAI/blob/master/examples/notebooks/mednist_GAN_workflow.ipynb)

Step 1: Setup

Setup environment
%pip install -qU "monai[ignite]"

# temporarily need this, FIXME remove when d93c0a6 released
%pip install -qU git+https://github.com/Project-MONAI/MONAI#egg=MONAI

%pip install -qU matplotlib
%matplotlib inline
Note: you may need to restart the kernel to use updated packages.
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
Setup imports
import logging import os import shutil import sys import tempfile import IPython import matplotlib.pyplot as plt import torch from monai.apps import download_and_extract from monai.config import print_config from monai.data import CacheDataset, DataLoader from monai.engines import GanKeys, GanTrainer, default_make_latent from monai.handlers import CheckpointSaver, MetricLogger, StatsHandler from monai.networks import normal_init from monai.networks.nets import Discriminator, Generator from monai.transforms import ( AddChannelD, Compose, LoadPNGD, RandFlipD, RandRotateD, RandZoomD, ScaleIntensityD, ToTensorD, ) from monai.utils import set_determinism print_config()
MONAI version: 0.2.0+74.g8e5a53e Python version: 3.7.5 (default, Nov 7 2019, 10:50:52) [GCC 8.3.0] Numpy version: 1.19.1 Pytorch version: 1.6.0 Optional dependencies: Pytorch Ignite version: 0.3.0 Nibabel version: NOT INSTALLED or UNKNOWN VERSION. scikit-image version: NOT INSTALLED or UNKNOWN VERSION. Pillow version: 7.2.0 Tensorboard version: NOT INSTALLED or UNKNOWN VERSION. For details about installing the optional dependencies, please visit: https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
Setup data directory

You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
/home/bengorman/notebooks/
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
Download dataset

Downloads and extracts the dataset. The MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions), [the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4), and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest). The dataset is kindly made available by [Dr. Bradley J. Erickson M.D., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic) under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/). If you use the MedNIST dataset, please acknowledge the source, e.g. https://github.com/Project-MONAI/MONAI/blob/master/examples/notebooks/mednist_tutorial.ipynb.
resource = "https://www.dropbox.com/s/5wwskxctvcxiuea/MedNIST.tar.gz?dl=1"
md5 = "0bc7306e7427e00ad1c5526a6677552d"

compressed_file = os.path.join(root_dir, "MedNIST.tar.gz")
data_dir = os.path.join(root_dir, "MedNIST")
download_and_extract(resource, compressed_file, root_dir, md5)

hand_dir = os.path.join(data_dir, "Hand")
training_datadict = [
    {"hand": os.path.join(hand_dir, filename)} for filename in os.listdir(hand_dir)
]
print(training_datadict[:5])
[{'hand': '/home/bengorman/notebooks/MedNIST/Hand/003676.jpeg'}, {'hand': '/home/bengorman/notebooks/MedNIST/Hand/006548.jpeg'}, {'hand': '/home/bengorman/notebooks/MedNIST/Hand/002169.jpeg'}, {'hand': '/home/bengorman/notebooks/MedNIST/Hand/004081.jpeg'}, {'hand': '/home/bengorman/notebooks/MedNIST/Hand/004815.jpeg'}]
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
Step 2: Initialize MONAI components
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
set_determinism(0)
device = torch.device("cuda:0")
_____no_output_____
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
Create image transform chain

Define the processing pipeline to convert saved disk images into usable Tensors.
train_transforms = Compose(
    [
        LoadPNGD(keys=["hand"]),
        AddChannelD(keys=["hand"]),
        ScaleIntensityD(keys=["hand"]),
        RandRotateD(keys=["hand"], range_x=15, prob=0.5, keep_size=True),
        RandFlipD(keys=["hand"], spatial_axis=0, prob=0.5),
        RandZoomD(keys=["hand"], min_zoom=0.9, max_zoom=1.1, prob=0.5),
        ToTensorD(keys=["hand"]),
    ]
)
_____no_output_____
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
Create dataset and dataloader

Hold data and present batches during training.
real_dataset = CacheDataset(training_datadict, train_transforms)

batch_size = 300
real_dataloader = DataLoader(real_dataset, batch_size=batch_size, shuffle=True, num_workers=10)


def prepare_batch(batchdata):
    return batchdata["hand"]
_____no_output_____
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
Define generator and discriminator

Load basic computer vision GAN networks from libraries.
# define networks disc_net = Discriminator( in_shape=(1, 64, 64), channels=(8, 16, 32, 64, 1), strides=(2, 2, 2, 2, 1), num_res_units=1, kernel_size=5, ).to(device) latent_size = 64 gen_net = Generator( latent_shape=latent_size, start_shape=(latent_size, 8, 8), channels=[32, 16, 8, 1], strides=[2, 2, 2, 1], ) gen_net.conv.add_module("activation", torch.nn.Sigmoid()) gen_net = gen_net.to(device) # initialize both networks disc_net.apply(normal_init) gen_net.apply(normal_init) # define optimizors learning_rate = 2e-4 betas = (0.5, 0.999) disc_opt = torch.optim.Adam(disc_net.parameters(), learning_rate, betas=betas) gen_opt = torch.optim.Adam(gen_net.parameters(), learning_rate, betas=betas) # define loss functions disc_loss_criterion = torch.nn.BCELoss() gen_loss_criterion = torch.nn.BCELoss() real_label = 1 fake_label = 0 def discriminator_loss(gen_images, real_images): real = real_images.new_full((real_images.shape[0], 1), real_label) gen = gen_images.new_full((gen_images.shape[0], 1), fake_label) realloss = disc_loss_criterion(disc_net(real_images), real) genloss = disc_loss_criterion(disc_net(gen_images.detach()), gen) return (genloss + realloss) / 2 def generator_loss(gen_images): output = disc_net(gen_images) cats = output.new_full(output.shape, real_label) return gen_loss_criterion(output, cats)
_____no_output_____
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
Create training handlers

Perform operations during model training.
metric_logger = MetricLogger( loss_transform=lambda x: {GanKeys.GLOSS: x[GanKeys.GLOSS], GanKeys.DLOSS: x[GanKeys.DLOSS]}, metric_transform=lambda x: x, ) handlers = [ StatsHandler( name="batch_training_loss", output_transform=lambda x: { GanKeys.GLOSS: x[GanKeys.GLOSS], GanKeys.DLOSS: x[GanKeys.DLOSS], }, ), CheckpointSaver( save_dir=os.path.join(root_dir, "hand-gan"), save_dict={"g_net": gen_net, "d_net": disc_net}, save_interval=10, save_final=True, epoch_level=True, ), metric_logger, ]
_____no_output_____
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
Create GanTrainer

MONAI workflow engine for adversarial learning. The components come together here with the GanTrainer. It uses a training loop based on Goodfellow et al. 2014, https://arxiv.org/abs/1406.2661.

```
Training Loop: for each batch of data size m
    1. Generate m fakes from random latent codes.
    2. Update D with these fakes and current batch reals, repeated d_train_steps times.
    3. Generate m fakes from new random latent codes.
    4. Update generator with these fakes using discriminator feedback.
```
disc_train_steps = 5 num_epochs = 50 trainer = GanTrainer( device, num_epochs, real_dataloader, gen_net, gen_opt, generator_loss, disc_net, disc_opt, discriminator_loss, d_prepare_batch=prepare_batch, d_train_steps=disc_train_steps, g_update_latents=True, latent_shape=latent_size, key_train_metric=None, train_handlers=handlers, )
_____no_output_____
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
Step 3: Start Training
trainer.run()
IPython.display.clear_output()
_____no_output_____
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
Evaluate Results

Examine G and D loss curves for collapse.
g_loss = [loss[GanKeys.GLOSS] for loss in metric_logger.loss]
d_loss = [loss[GanKeys.DLOSS] for loss in metric_logger.loss]

plt.figure(figsize=(12, 5))
plt.semilogy(g_loss, label="Generator Loss")
plt.semilogy(d_loss, label="Discriminator Loss")
plt.grid(True, "both", "both")
plt.legend()
plt.show()
_____no_output_____
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
View image reconstructions

With random latent codes, view trained generator output.
test_img_count = 10
test_latents = default_make_latent(test_img_count, latent_size).to(device)
fakes = gen_net(test_latents)

fig, axs = plt.subplots(2, (test_img_count // 2), figsize=(20, 8))
axs = axs.flatten()
for i, ax in enumerate(axs):
    ax.axis("off")
    ax.imshow(fakes[i, 0].cpu().data.numpy(), cmap="gray")
_____no_output_____
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
Cleanup data directory

Remove the directory if a temporary one was used.
if directory is None:
    shutil.rmtree(root_dir)
_____no_output_____
Apache-2.0
examples/notebooks/mednist_GAN_workflow.ipynb
BRAINSia/MONAI
We show how to translate from syntax trees, to pregroup parsing diagrams, to equivalent diagrams where all cups and caps are removed.
from discopy import Ty, Id, Box, Diagram, Word

# POS TAGS:
s, n, adj, v, vp = Ty('S'), Ty('N'), Ty('ADJ'), Ty('V'), Ty('VP')

# WORDS:
Jane = Word('Jane', n)
loves = Word('loves', v)
funny = Word('funny', adj)
boys = Word('boys', n)
vocab = [Jane, loves, funny, boys]
_____no_output_____
BSD-3-Clause
notebooks/rewriting-grammar.ipynb
dimkart/discopy
Syntax trees
# The CFG's production rules are boxes.
R0 = Box('R0', vp @ n, s)
R1 = Box('R1', n @ vp, s)
R2 = Box('R2', n @ v, vp)
R3 = Box('R3', v @ n, vp)
R4 = Box('R4', adj @ n, n)

# A syntax tree is a diagram!
tree0 = R2 @ R4 >> R0
tree1 = Id(n @ v) @ R4 >> Id(n) @ R3 >> R1

sentence0 = Jane @ loves @ funny @ boys >> tree0
sentence1 = Jane @ loves @ funny @ boys >> tree1

print("Two syntax trees for sentence 'Jane loves funny boys':")
sentence0.draw(aspect='auto')
sentence1.draw(aspect='auto')
Two syntax trees for sentence 'Jane loves funny boys':
BSD-3-Clause
notebooks/rewriting-grammar.ipynb
dimkart/discopy
Pregroup parsing
from discopy.rigid import Cup, Cap, Functor

# Dict from POS tags to Pregroup types:
ob = {n: n, s: s, adj: n @ n.l, v: n.r @ s @ n.l, vp: s @ n.l}

_Jane = Word('Jane', n)
_loves = Word('loves', n.r @ s @ n.l)
_funny = Word('funny', n @ n.l)
_boys = Word('boys', n)

# Dict from CFG rules to Pregroup reductions:
ar = {R0: Id(s) @ Cup(n.l, n),
      R2: Cup(n, n.r) @ Id(s @ n.l),
      R4: Id(n) @ Cup(n.l, n),
      R3: Id(n.r @ s) @ Cup(n.l, n),
      R1: Cup(n, n.r) @ Id(s),
      Jane: _Jane, loves: _loves, funny: _funny, boys: _boys}

T2P = Functor(ob, ar)

print("The syntax trees are mapped to pregroup diagrams, equivalent up to interchanger:")
T2P(sentence0).draw(aspect='auto')
T2P(sentence1).draw(aspect='auto')

# Check that the two diagrams above are equal up to monoidal.normal_form()
assert T2P(sentence0).normal_form() == T2P(sentence1).normal_form()
_____no_output_____
BSD-3-Clause
notebooks/rewriting-grammar.ipynb
dimkart/discopy
Snake removal
# Define the Wiring functor that decomposes a word into monoidal boxes with inputs transposed: love_box = Box('loves', n @ n, s) funny_box = Box('funny', n, n) ob = {n: n, s: s} ar = {_Jane: _Jane, _boys: _boys, _loves: Cap(n.r, n) @ Cap(n, n.l) >> Diagram.id(n.r) @ love_box @ Diagram.id(n.l), _funny: Cap(n, n.l) >> funny_box @ Id(n.l)} W = Functor(ob, ar) print('Image of the wiring functor:') W(T2P(sentence0)).draw(aspect='auto') rewrite_steps = W(T2P(sentence0)).normalize() print("Equivalent diagram for 'Jane loves funny boys', after snake removal:") T2P(sentence0).to_gif(*rewrite_steps, path='../docs/imgs/jane_boys.gif', aspect='auto')
Equivalent diagram for 'Jane loves funny boys', after snake removal:
BSD-3-Clause
notebooks/rewriting-grammar.ipynb
dimkart/discopy
# Download and unpack archive with CIFAR10 dataset to disk from official site: https://www.cs.toronto.edu/~kriz/cifar.html !wget https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz !tar -xzf cifar-10-python.tar.gz !ls -l # Loading CIFAR10 data using code from official site: https://www.cs.toronto.edu/~kriz/cifar.html import numpy as np import random def unpickle(file,encoding = 'bytes'): import pickle with open(file, 'rb') as fo: dict = pickle.load(fo, encoding=encoding) return dict def load_train_data(): x = np.zeros((0,3072),dtype=int) # To avoid overflow y = np.array([],dtype=int) for i in range(1,6): raw = unpickle(f"/content/cifar-10-batches-py/data_batch_{i}") x = np.append(x,np.array(raw[b'data'],dtype=int),axis=0) y = np.append(y,np.array(raw[b'labels'],dtype=int),axis=0) return x,y x_train, y_train = load_train_data() test = unpickle("/content/cifar-10-batches-py/test_batch") x_test = np.array(test[b'data']) y_test = np.array(test[b'labels']) # Load label names. For for convenience only. meta = unpickle("/content/cifar-10-batches-py/batches.meta",'utf-8') labels= meta['label_names'] class NearestNeighbor: def __init__(self): pass def train(self,x,y): self.train_data = x self.train_labels = y def predict(self,x): # To avoid overflow data must be int, not a byte! distances = np.sum(np.abs(self.train_data - x),axis = 1) # Axis 0 it's a row num in image list return self.train_labels[np.argmin(distances)] # Function to check model accuracy def validate(model,x_test,y_test): correct = 0 for i, sample in enumerate(x_test): index = model.predict(sample) correct += 1 if index == y_test[i] else 0 if i > 0 and i % 100 == 0: print ("Accuracy {:.3f}".format(correct/i)) return correct/len(x_test) # Now test accuracy and speed of model import time nn = NearestNeighbor() nn.train(x_train,y_train) start = time.perf_counter() accuracy = validate(nn,x_test[:100],y_test[:100]) tm = time.perf_counter() - start total = x_test.shape[0] print("Accuracy {:.2f} Train {:d} /test {:d} in {:.1f} sec. speed {:.2f} samples per second.".format(accuracy,len(x_train),total,tm,total/tm,) ) # KNN from collections import Counter class KNearestNeighbor(NearestNeighbor): def __init__(self,k): self.k = k pass def predict(self,x): distances = np.sum(np.abs(self.train_data - x),axis = 1) # L1 sorted_distance_indexes = np.argsort(distances) k_nearest_images = sorted_distance_indexes[:self.k] most_common = Counter(self.train_labels[k_nearest_images]).most_common() return most_common[0][0] knn = KNearestNeighbor(11) knn.train( x_train,y_train) validate(knn,x_test[:1000],y_test[:1000])
Accuracy 0.380 Accuracy 0.410 Accuracy 0.373 Accuracy 0.375 Accuracy 0.380 Accuracy 0.383
MIT
extra/KNN_demo.ipynb
Gan4x4/hse-cv2019
**LetsGrowMore**

***Data Science Internship***

`Author: UMER FAROOQ`

`Task Level: Beginner Level`

`Task Number: 1`

`Task Title: Iris Flower Classification`

`Language: Python`

`IDE: Google Colab`

**Steps**:

**Step:1** ***Importing Libraries***
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_palette('husl')
import matplotlib.pyplot as plt
%matplotlib inline

from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
_____no_output_____
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
**Step:2** ***Loading the Dataset:*** I picked the dataset from the following link. You can download the dataset, or use it directly via the URL.
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/iris.csv'

# Creating the list of column names:
column_name = ['sepal-lenght', 'sepal-width', 'petal-lenght', 'petal-width', 'class']

# Pandas read_csv() is used for reading the csv file:
dataset = pd.read_csv(url, names=column_name)
_____no_output_____
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
**Step:3** ***Dataset Summarizing:*** Check the structure/shape of the data we have to work with.
dataset.shape
_____no_output_____
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
This shows that we have:
1. 150 rows,
2. 5 columns.

That's enough for our beginner project.

*Displaying the first 5 records:*
dataset.head()
_____no_output_____
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
The Pandas info() method prints information about a DataFrame, such as the data types, columns, NaN values and memory usage:
dataset.info()

dataset.isnull()
# Returns no. of missing records/values
dataset.isnull().sum()

"""
Pandas describe() is used to view some basic statistical details like
percentile, mean, std etc. of a data frame or a series of numeric values:
"""
dataset.describe()
_____no_output_____
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
**Now let's check the number of rows that belong to each class:**
dataset['class'].value_counts() # No of records/samples in each class
_____no_output_____
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
The above output shows that each class of flowers has 50 rows.

**Step: 4 Data Visualization**

Data visualization is the process of translating large data sets and metrics into charts, graphs and other visuals.

**Violin Plot:** Plotting violin plots to compare the distribution of each variable across classes:
sns.violinplot(y='class', x='sepal-lenght', data=dataset, inner='quartile')
plt.show()
print('\n')
sns.violinplot(y='class', x='sepal-width', data=dataset, inner='quartile')
plt.show()
print('\n')
sns.violinplot(y='class', x='petal-lenght', data=dataset, inner='quartile')
plt.show()
print('\n')
sns.violinplot(y='class', x='petal-width', data=dataset, inner='quartile')
plt.show()
print('\n')
_____no_output_____
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
The violin plots above show that the Iris-setosa class has a smaller petal length and petal width compared to the other classes.

**Pair Plot:** Plotting multiple pairwise bivariate distributions in the dataset using pairplot:
sns.pairplot(dataset, hue='class', markers='+')
plt.show()
_____no_output_____
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
From the above, we can see that Iris-setosa is separated from both other species in all the features.

**Heatmap:** Plotting the heatmap to check the correlation. **dataset.corr()** is used to find the pairwise correlation of all columns in the dataframe.
plt.figure(figsize=(8, 5))
sns.heatmap(dataset.corr(), annot=True, cmap='PuOr')
plt.show()
_____no_output_____
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
**Step: 5 Model Construction (Splitting, Training and Model Creation)**

**SPLITTING THE DATASET:**

X holds the independent variables (the four measurement features). Y holds the dependent variable (the class label we want to predict).
x = dataset.drop(['class'], axis=1)
y = dataset['class']  # 'class' is the dependent (target) variable

print('X shape: {}\nY Shape: {}'.format(x.shape, y.shape))
X shape: (150, 4) Y Shape: (150,)
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
The output shows that X has 150 records/rows and 4 columns, whereas Y has 150 records and only 1 column.

**TRAIN/TEST SPLIT:**

Splitting our dataset into train and test sets using train_test_split(). What we are doing here is taking 80% of the data to train our model, and holding back 20% as a validation dataset:
x_train, x_test, y_train, y_test = train_test_split (x, y, test_size=0.20, random_state=1)
_____no_output_____
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
**MODEL CONSTRUCTION PART 1:**

We have no idea which algorithms might work best in this situation, so let's run each algorithm in a loop and print its accuracy so we can choose the best one. The algorithms are:
1. Logistic Regression (LR)
2. Linear Discriminant Analysis (LDA)
3. K-Nearest Neighbors (KNN)
4. Classification and Regression Trees (CART)
5. Gaussian Naive Bayes (NB)
6. Support Vector Machines (SVM)
models = []
models.append(('LR', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVC', SVC(gamma='auto')))

# evaluate each model in turn
results = []
model_names = []
for name, model in models:
    kfold = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
    cv_results = cross_val_score(model, x_train, y_train, cv=kfold, scoring='accuracy')
    results.append(cv_results)
    model_names.append(name)
    print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std()))
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1): STOP: TOTAL NO. of ITERATIONS REACHED LIMIT. Increase the number of iterations (max_iter) or scale the data as shown in: https://scikit-learn.org/stable/modules/preprocessing.html Please also refer to the documentation for alternative solver options: https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG) /usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1): STOP: TOTAL NO. of ITERATIONS REACHED LIMIT. Increase the number of iterations (max_iter) or scale the data as shown in: https://scikit-learn.org/stable/modules/preprocessing.html Please also refer to the documentation for alternative solver options: https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG) /usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1): STOP: TOTAL NO. of ITERATIONS REACHED LIMIT. Increase the number of iterations (max_iter) or scale the data as shown in: https://scikit-learn.org/stable/modules/preprocessing.html Please also refer to the documentation for alternative solver options: https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG) /usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1): STOP: TOTAL NO. of ITERATIONS REACHED LIMIT. Increase the number of iterations (max_iter) or scale the data as shown in: https://scikit-learn.org/stable/modules/preprocessing.html Please also refer to the documentation for alternative solver options: https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG) /usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1): STOP: TOTAL NO. of ITERATIONS REACHED LIMIT. Increase the number of iterations (max_iter) or scale the data as shown in: https://scikit-learn.org/stable/modules/preprocessing.html Please also refer to the documentation for alternative solver options: https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
***The Support Vector Classifier (SVC) is performing better than the other algorithms. Let's train an SVC model on our training set and predict on the test set in the next step.***

**MODEL CONSTRUCTION PART 2:**

We define our SVC model, passing gamma as auto. After that, we fit/train the model on x_train and y_train using the .fit() method. Then we predict on x_test using the .predict() method.
model = SVC(gamma='auto')
model.fit(x_train, y_train)
prediction = model.predict(x_test)
_____no_output_____
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
**Now checking the accuracy of the model using accuracy_score(y_test, prediction).**

y_test holds the actual class labels for x_test; prediction holds the labels the model predicted for x_test (produced with the .predict() method, as mentioned earlier).

**Printing out the classification report using:** classification_report(y_test, prediction)
print(f"Test Accuracy: {accuracy_score(y_test, prediction)} \n")
print(f'Classification Report:\n \n {classification_report(y_test, prediction)}')
Test Accuracy: 0.9666666666666667 Classification Report: precision recall f1-score support Iris-setosa 1.00 1.00 1.00 11 Iris-versicolor 1.00 0.92 0.96 13 Iris-virginica 0.86 1.00 0.92 6 accuracy 0.97 30 macro avg 0.95 0.97 0.96 30 weighted avg 0.97 0.97 0.97 30
MIT
TASK#1.ipynb
Umer86/IRIS_FLOWER_Classfication_ML_Project
Statistics

Statistics is the branch of mathematics that studies variability, as well as the random process that generates it according to the laws of probability. Statistics is useful for a wide variety of empirical sciences (those that understand facts by building representations of reality), from physics to the social sciences, from the health sciences to quality control. It is also used in business and in government institutions, with the goal of describing the data collected in order to support decision making, or to make generalisations about the observed characteristics.

Statistics is divided into two large areas:

- **Descriptive statistics**: Devoted to the description, visualization and summary of data originating from the phenomena under study. The data can be summarized numerically or graphically. Its objective is to organize and describe the characteristics of a dataset in order to make it easier to use, generally with the support of charts, tables or numerical measures.
    - Basic examples of statistical parameters are: the mean and the standard deviation.
    - Graphical examples are: the histogram, the population pyramid, the pie chart, among others.
- **Inferential statistics**: Devoted to generating the models, inferences and predictions associated with the phenomena in question, taking into account the randomness of the observations. It is used to model patterns in the data and draw inferences about the population under study. These inferences can take the form of answers to yes/no questions (hypothesis testing), estimates of numerical characteristics (estimation), forecasts of future observations, descriptions of association (correlation) or modelling of relationships between variables (regression analysis). Other modelling techniques include analysis of variance, time series and data mining. Its objective is to obtain useful conclusions so as to make deductions about the totality of all observations made, based on the numerical information.

Sampling

![alt text](https://upload.wikimedia.org/wikipedia/commons/b/bf/Simple_random_sampling.PNG)

One reason we need statistics is because we'd like to make statements about a general population based only on a subset - a *sample* - of the data. This can be for practical reasons, to reduce costs, or can be inherently necessary because of the nature of the problem. For example, it's not possible to collect data on "everyone who ever had a headache", so people wanting to study headaches will have to somehow get a group of people and try to generalize based on that.

What's the right way to build that group? If a drug company decides to only ask its employees if their headache drug works, does that have any problems?

*Yes* - let's discuss!
import random

population = range(100)
sample = random.sample(population, 10)  # sample() draws a random sample from within the range
print(sample)
[2, 78, 40, 59, 13, 31, 24, 43, 29, 68]
Apache-2.0
week3/day3/theory/statistics.ipynb
Marvxeles/mxrodvi
--> Task: Create a population from a list of students' heights, using classes.

Basic concepts of descriptive statistics

In *[descriptive statistics](https://es.wikipedia.org/wiki/Estad%C3%ADstica_descriptiva)* different measures are used to try to describe the properties of our data. Some of the basic concepts are:

* **Arithmetic mean**: The [arithmetic mean](https://es.wikipedia.org/wiki/Media_aritm%C3%A9tica) is the value obtained by adding all the [data](https://es.wikipedia.org/wiki/Dato) and dividing the result by the total number of elements. It is usually denoted by the Greek letter $\mu$. If we have a [sample](https://es.wikipedia.org/wiki/Muestra_estad%C3%ADstica) of $n$ values, $x_i$, the *arithmetic mean*, $\mu$, is the sum of the values divided by the number of elements; in other words:$$\mu = \frac{1}{n} \sum_{i}x_i$$
* **Deviation from the mean**: The deviation from the mean is the absolute difference between each value of the statistical variable and the arithmetic mean.$$D_i = |x_i - \mu|$$
* **Variance**: The [variance](https://es.wikipedia.org/wiki/Varianza) is the arithmetic mean of the squared deviations from the mean of a statistical distribution. The variance tries to describe the dispersion of the data; essentially, it represents how much the data vary. It is denoted $\sigma^2$. $$\sigma^2 = \frac{\sum\limits_{i=1}^n(x_i - \mu)^2}{n} $$
* **Standard deviation**: The [standard deviation](https://es.wikipedia.org/wiki/Desviaci%C3%B3n_t%C3%ADpica) is the square root of the variance. It is denoted by the Greek letter $\sigma$.$$\sigma = \sqrt{\frac{\sum\limits_{i=1}^n(x_i - \mu)^2}{n}} $$
* **Mode**: The mode is the value with the highest absolute frequency. It is denoted $M_0$.
* **Median**: The median is the value that occupies the central position when all the data are sorted from smallest to largest. It is denoted $\widetilde{x}$.
* **Correlation**: [Correlation](https://es.wikipedia.org/wiki/Correlaci%C3%B3n) tries to establish the relationship or dependence between the two variables involved in a bivariate distribution, that is, to determine whether changes in one variable influence changes in the other. If they do, we say the variables are correlated, or that there is correlation between them. The correlation is positive when the values of the variables increase together, and negative when the value of one variable decreases as the value of the other increases.
* **Covariance**: The [covariance](https://es.wikipedia.org/wiki/Covarianza) is the equivalent of the variance applied to a bivariate variable. It is the arithmetic mean of the products of the deviations of each variable from its respective mean. The covariance indicates the direction of the correlation between the variables: if $\sigma_{xy} > 0$ the correlation is direct; if $\sigma_{xy} < 0$ the correlation is inverse.$$\sigma_{xy} = \frac{\sum\limits_{i=1}^n(x_i - \mu_x)(y_i -\mu_y)}{n}$$
* **Outlier**: An [outlier](https://es.wikipedia.org/wiki/Valor_at%C3%ADpico) is an observation that strays too far from the mode; it lies very far from the main trend of the rest of the [data](https://es.wikipedia.org/wiki/Dato). Outliers can be caused by errors in data collection or by unusual measurements. They are generally removed from the [dataset](https://es.wikipedia.org/wiki/Conjunto_de_datos).

1. https://towardsdatascience.com/a-quick-guide-on-descriptive-statistics-using-pandas-and-seaborn-2aadc7395f32
2. https://www.tutorialspoint.com/python_pandas/python_pandas_descriptive_statistics.htm

Examples in Python

Computing the main indicators of [descriptive statistics](https://es.wikipedia.org/wiki/Estad%C3%ADstica_descriptiva) with [Python](http://python.org/) is very easy!
import numpy as np #np y pd es un alias. son librerias de estadistica y tratamientos de datos import pandas as pd import matplotlib.pyplot as pyplot #se instalan en terminal #pip3 install numpy from numpy import * #importo las funciones de numpy a jupiter y se puede usar sin np. #no se recomienda lista = [2, 4, 6, 8] print(type(lista)) lista_array = np.asarray(lista) #numpy trabaja con arrays. cambio de lista a array con asarray(lista) ARRAY tiene sus propios metodos incluidos en np #el equivalente a max(lista) q nos da el numero mas alto es lista_array.argmax() #list(lista_array) paso de array a lista print(type(lista_array)) np.arange() from scipy import unique, stats # Ejemplos de estadistica descriptiva con python import numpy as np # importando numpy from scipy import stats # importando scipy.stats import pandas as pd # importando pandas np.random.seed(4) # para poder replicar el random. fichero semilla seed lista = [2., 1.76, 1.8, 1.6] a = np.array(lista) a datos = np.random.randn(5, 4) # datos normalmente distribuidos. 5 filas y 4 columnas datos #ya es un array de numpy # media arítmetica datos.mean() # Calcula la media aritmetica de los datos. no le alado np pq es un array q ya es de numpy x = np.mean(datos) # Mismo resultado desde la funcion de numpy x datos datos.mean(axis=1) # media aritmetica de cada fila datos.mean(axis=0) # media aritmetica de cada columna # mediana np.median(datos) np.median(datos, axis=0) # media aritmetica de cada columna import numpy as np # Desviación típica np.std(datos) type(datos) datos.std() np.std(datos, 0) # Desviación típica de cada columna lista = [1, 3, 5, 7] np.mean(a=lista) x = np.array(lista) x x.mean() # varianza np.var(datos) datos.var() np.var(datos, 0) # varianza de cada columna # moda stats.mode(datos) # Calcula la moda de cada columna # el 2do array devuelve la frecuencia. datos2 = np.array([1, 2, 3, 6, 6, 1, 2, 4, 2, 2, 6, 6, 8, 10, 6]) from scipy import stats # importando scipy.stats stats.mode(datos2) # aqui la moda es el 6 porque aparece 5 veces en el vector. # correlacion np.corrcoef(datos) # Crea matriz de correlación. # calculando la correlación entre dos vectores. 
np.corrcoef(datos[0], datos[1]) # covarianza np.cov(datos) # calcula matriz de covarianza # covarianza de dos vectores np.cov(datos[0], datos[1]) datos import pandas as pd # usando pandas dataframe = pd.DataFrame(datos, index=['a', 'b', 'c', 'd', 'e'], columns=['col1', 'col2', 'col3', 'col4']) dataframe dataframe["col1"].values # resumen estadistadistico con pandas dataframe.describe() lista = [2., 1.76, 1.8, 1.6] lista_array = np.array(lista) print(lista_array) dataframe2 = pd.DataFrame(lista_array, index=['a', 'b', 'c', 'd'], columns=['col1']) dataframe2 dataframe # sumando las columnas dataframe.sum() # sumando filas dataframe.sum(axis=1) dataframe.cumsum() # acumulados # media aritmética de cada columna con pandas dataframe.mean() # media aritmética de cada fila con pandas dataframe.mean(axis=1) altura1 = [1.78, 1.63, 1.75, 1.68] altura2 = [2.00, 1.82, 1.76, 1.66] altura3 = [1.65, 1.73, 1.75, 1.76] altura4 = [1.72, 1.71, 1.71, 1.62] lista_alturas = [altura1, altura2, altura3, altura4] class Humano(): def __init__(self, altura): self.altura = altura def crece(self): self.altura = self.altura + 2.0 lista_humanos = [] for col_alt in lista_alturas: for alt in col_alt: humano = Humano(altura=alt) lista_humanos.append(humano) lista_humanos for humano in lista_humanos: print(humano.altura) humano.crece() print(humano.altura) print("----------") lista_alturas x = np.arange(0, 15) x lista = [2, 5, 7] print(sum(lista)) import numpy as np def axis_x(limit): return np.arange(0, limit) total_elements_x = axis_x(limit=sum([len(x) for x in lista_alturas])) total_elements_x lista_alturas_total = [] for humano in lista_humanos: lista_alturas_total.append(humano.altura) array_alturas_completo = np.array(lista_alturas_total).T array_alturas_completo array_alturas_completo.shape import pandas as pd df = pd.DataFrame({'Alturas':array_alturas_completo}) df import numpy as np import pandas as pd import matplotlib.pyplot as plt plt.plot(lista_alturas_alumnos, linewidth=2, marker='o', color="g", linestyle='dashed', markersize=12) plt.ylabel("Alturas") plt.show() import mi_libreria as ml ml.grafica_verde(lista=lista_alturas_alumnos, ylabel="Alturas 2") def create_array_with_same_value(value, limit): return np.full(limit, value) a = create_array_with_same_value(2, 80) a a = np.arange(4) a media = np.mean(lista_alturas_alumnos) media_graph = create_array_with_same_value(value=media, limit=len(lista_alturas_alumnos)) print(lista_alturas_alumnos) media_graph print(len(lista_alturas_alumnos)) print(len(media_graph)) # Añadimos la media a la gráfica plt.plot(lista_alturas_alumnos, "ro") media = np.mean(lista_alturas_alumnos) print("Media:", media) media_graph = create_array_with_same_value(value=media, limit=len(lista_alturas_alumnos)) plt.ylabel("Alturas") plt.plot(media_graph, "b--") plt.show() lista_alturas_total = np.array(lista_alturas_total) std = np.std(lista_alturas_total) std std_superior = media + std std_inferior = media - std plt.plot(lista_alturas_total, "ro") media = np.mean(lista_alturas_total) print("Media:", media) std_superior_total = create_array_with_same_value(value=std_superior, limit=numero_de_alturas) std_inferior_total = create_array_with_same_value(value=std_inferior, limit=numero_de_alturas) plt.ylabel("Alturas") plt.plot(std_superior_total, "b--") plt.plot(std_inferior_total, "g--") plt.show() lista = [[[[2,4,6,8], [5,6,7,8]]]] lista_array = np.array(lista) lista_array lista_array.shape arrays_alturas = np.array([np.array(x) for x in lista_alturas]) arrays_alturas np.mean(arrays_alturas) 
np.mean(arrays_alturas, 1) # Fila np.mean(arrays_alturas, 0) # Columna lista = [2, 2, 2, 4, 6, 10, 10] 2 --> 3 4 --> 1 6 --> 1 10 -> 2 import pandas as pd import numpy as np df = pd.DataFrame({'Alturas':np.array(lista_alturas_alumnos)}) df df.hist(bins=10)
_____no_output_____
Apache-2.0
week3/day3/theory/statistics.ipynb
Marvxeles/mxrodvi
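The definitions above recommend dropping outliers from the dataset, although the cell above never shows how. A minimal hedged sketch of one common rule (keep only values within two standard deviations of the mean); the toy array and the threshold of 2 are illustrative choices, not something prescribed by the original notebook.

```python
import numpy as np

datos = np.array([1.60, 1.65, 1.70, 1.72, 1.75, 1.78, 1.80, 3.50])  # 3.50 is an artificial outlier

mu, sigma = datos.mean(), datos.std()
mask = np.abs(datos - mu) <= 2 * sigma   # keep values within 2 standard deviations of the mean
sin_atipicos = datos[mask]
print(sin_atipicos)                      # the 3.50 entry is dropped, the plausible heights remain
```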
Histograms and Distributions

Often the indicators of [descriptive statistics](https://es.wikipedia.org/wiki/Estad%C3%ADstica_descriptiva) do not give us a clear picture of our [data](https://es.wikipedia.org/wiki/Dato). For this reason it is always useful to complement them with plots of the distributions of the data, which describe how frequently each value appears. The most common representation of a distribution is a [histogram](https://es.wikipedia.org/wiki/Histograma), a chart that shows the frequency or probability of each value. The histogram shows the frequencies as a bar chart indicating how often a given value occurs in the [dataset](https://es.wikipedia.org/wiki/Conjunto_de_datos). The horizontal axis represents the values in the dataset and the vertical axis represents the frequency with which those values occur.

Distributions can be classified into two broad groups:

1. **[Continuous distributions](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_de_probabilidad_continua)**, which can take an infinite number of possible values. Within this group we find the distributions:
    * [normal](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_normal),
    * [gamma](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_gamma),
    * [chi-squared](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_%CF%87%C2%B2),
    * [Student's t](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_t_de_Student),
    * [Pareto](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_de_Pareto),
    * among others
2. **Discrete distributions**, in which the variable can only take a certain number of values. The main members of this group are the distributions:
    * [Poisson](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_de_Poisson),
    * [binomial](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_binomial),
    * [hypergeometric](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_hipergeom%C3%A9trica),
    * [Bernoulli](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_de_Bernoulli),
    * among others

Let's look at a few examples plotted with the help of [Python](http://python.org/).
# Histogram
df.hist()
_____no_output_____
Apache-2.0
week3/day3/theory/statistics.ipynb
Marvxeles/mxrodvi
Normal distribution

The [normal distribution](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_normal) is one of the most important distributions, since it is the one that most frequently appears, at least approximately, in real-world phenomena. It is bell-shaped and symmetric about a given statistical parameter. With the help of [Python](http://python.org/) we can plot it as follows:
# Inline (embedded) plots.
%matplotlib inline

import matplotlib.pyplot as plt  # importing matplotlib
import seaborn as sns            # importing seaborn

# seaborn aesthetic parameters
sns.set_palette("deep", desat=.6)
sns.set_context(rc={"figure.figsize": (8, 4)})

mu, sigma = 0, 0.1  # mean and standard deviation
s = np.random.normal(mu, sigma, 1000)  # generating a sample of data

# histogram of the normal distribution
# (normed=True was removed from recent matplotlib; density=True is the replacement)
cuenta, cajas, ignorar = plt.hist(s, 30, density=True)
normal = plt.plot(cajas, 1/(sigma * np.sqrt(2 * np.pi)) *
                  np.exp(-(cajas - mu)**2 / (2 * sigma**2)),
                  linewidth=2, color='r')
_____no_output_____
Apache-2.0
week3/day3/theory/statistics.ipynb
Marvxeles/mxrodvi
Symmetric and asymmetric distributions

A distribution is symmetric when the mode, median and mean roughly coincide in value. If a distribution is symmetric, there are as many values to the right of the mean as to the left, and therefore as many deviations with a positive sign as with a negative sign.

A distribution has positive [skewness](https://es.wikipedia.org/wiki/Asimetr%C3%ADa_estad%C3%ADstica) (to the right) if the "tail" to the right of the mean is longer than the one to the left, that is, if there are values further from the mean on the right. Likewise, a distribution has negative skewness (to the left) if the "tail" to the left of the mean is longer than the one to the right, that is, if there are values further from the mean on the left.

Skewed distributions tend to be problematic, because most statistical methods are developed for distributions of the [normal](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_normal) type. To get around these problems the data are often transformed so that the distribution becomes more symmetric and closer to the [normal distribution](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_normal).
# Plotting the gamma distribution
x = stats.gamma(3).rvs(5000)
gamma = plt.hist(x, 70, histtype="stepfilled", alpha=.7)
_____no_output_____
Apache-2.0
week3/day3/theory/statistics.ipynb
Marvxeles/mxrodvi
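The text above mentions that skewed data are often transformed to look more symmetric; here is a minimal hedged sketch of one common option, a logarithmic transform, applied to gamma-distributed samples like the ones just plotted. This is purely illustrative and not a step from the original notebook.

```python
import numpy as np
from scipy import stats

x = stats.gamma(3).rvs(5000)   # right-skewed sample
x_log = np.log(x)              # the log compresses the long right tail

print(stats.skew(x))           # clearly positive
print(stats.skew(x_log))       # smaller in magnitude; square-root or cube-root transforms are other common choices
```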
In this example we can see that the [gamma distribution](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_gamma) we plotted has positive [skewness](https://es.wikipedia.org/wiki/Asimetr%C3%ADa_estad%C3%ADstica).
# Computing the skewness with scipy
stats.skew(x)
_____no_output_____
Apache-2.0
week3/day3/theory/statistics.ipynb
Marvxeles/mxrodvi
Quartiles and box plots

The **[quartiles](https://es.wikipedia.org/wiki/Cuartil)** are the three values of the statistical variable that divide an ordered [dataset](https://es.wikipedia.org/wiki/Conjunto_de_datos) into four equal parts. Q1, Q2 and Q3 correspond to 25%, 50% and 75% of the data; Q2 coincides with the median.

[Box plots](https://es.wikipedia.org/wiki/Diagrama_de_caja) are a visual representation that describes several important characteristics at once, such as dispersion and symmetry. They are drawn by representing the three quartiles and the minimum and maximum values of the data over a rectangle, aligned horizontally or vertically. These charts provide a wealth of information and are extremely useful for finding [outliers](https://es.wikipedia.org/wiki/Valor_at%C3%ADpico) and for comparing two [datasets](https://es.wikipedia.org/wiki/Conjunto_de_datos).
lista = [2, 3, 4, 5, 6, 7, 9, 10, 11, 1000]

import matplotlib.pyplot as plt

# Example of a box plot in Python
datos_1 = np.random.normal(100, 10, 200)
datos_2 = np.random.normal(80, 30, 200)   # must be defined: it is used below
datos_3 = np.random.normal(90, 20, 200)
datos_4 = np.random.normal(70, 25, 200)
datos_graf = [datos_1, datos_2, datos_3, datos_4]

# Creating the figure object
fig = plt.figure(1, figsize=(9, 6))

# Creating the subplot
ax = fig.add_subplot(111)

# Creating the box plot
bp = ax.boxplot(datos_graf)

# Make the outliers easier to spot
for flier in bp['fliers']:
    flier.set(marker='o', color='red', alpha=0.5)
# the isolated points are outliers

df = pd.DataFrame(datos_2)
df

es_menor_a_80 = 0
for value in datos_2:
    if value <= 80:
        es_menor_a_80 += 1
print(es_menor_a_80)

df.hist(bins=2)

x = list(datos_2)
x.append(500)
datos_2 = np.array(x)
df = pd.DataFrame(datos_2)
df.hist()

# Using seaborn (the old names= argument no longer exists; label the ticks instead)
ax_sns = sns.boxplot(data=datos_graf, color="PaleGreen")
ax_sns.set_xticklabels(["grupo1", "grupo2", "grupo3", "grupo 4"]);
_____no_output_____
Apache-2.0
week3/day3/theory/statistics.ipynb
Marvxeles/mxrodvi
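A minimal sketch of computing the quartiles themselves and the 1.5×IQR rule that box plots use to flag outliers, applied to the small list that opens the cell above. The threshold of 1.5 is the conventional box-plot choice; everything else here is illustrative.

```python
import numpy as np

datos = np.array([2, 3, 4, 5, 6, 7, 9, 10, 11, 1000])   # 1000 is an obvious outlier

q1, q2, q3 = np.percentile(datos, [25, 50, 75])
iqr = q3 - q1
limite_inf, limite_sup = q1 - 1.5 * iqr, q3 + 1.5 * iqr

print(q1, q2, q3)                                          # Q2 coincides with the median
print(datos[(datos < limite_inf) | (datos > limite_sup)])  # values a box plot would draw as isolated points
```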
Data Aggregation and Group Operations
import numpy as np import pandas as pd PREVIOUS_MAX_ROWS = pd.options.display.max_rows pd.options.display.max_rows = 20 np.random.seed(12345) import matplotlib.pyplot as plt plt.rc('figure', figsize=(10, 6)) np.set_printoptions(precision=4, suppress=True)
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
10.1. GroupBy Mechanics 10.1.0 To get started, here is a small tabular dataset as a DataFrame:
df = pd.DataFrame({'key1' : ['a', 'a', 'b', 'b', 'a'],
                   'key2' : ['one', 'two', 'one', 'two', 'one'],
                   'data1' : np.random.randn(5),
                   'data2' : np.random.randn(5)})
df
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
Suppose you wanted to *compute the mean of the data1 column* using the *labels from key1*. There are a number of ways to do this. * One is to access data1 and call groupby with the column (a Series) at key1:
grouped = df['data1'].groupby(df['key1']) grouped
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
This grouped variable is now a GroupBy object. It has not actually computed anything yet except for some intermediate data about the group key df['key1']. The idea is that this object has all of the information needed to then apply some operation to each of the groups. For example, to compute group means we can call the GroupBy’s mean method:
grouped.mean()
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
Later, I’ll explain more about what happens when you call .mean(). The important thing here is that `the data (a Series) has been aggregated according to the group key`, producing a new Series that is now indexed by the unique values in the key1 column. The result index has the name 'key1' because the DataFrame column df['key1'] did. If instead we had passed multiple arrays as a list, we’d get something different:
means = df['data1'].groupby([df['key1'], df['key2']]).mean() means
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
Here we grouped the data using two keys (@P:key 1 + key 2 above), and the resulting Series now has a hierarchical index consisting of the unique pairs of keys observed:
means.unstack()
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
In this example, the group keys are all Series, though they could be any arrays of the right length:
states = np.array(['Ohio', 'California', 'California', 'Ohio', 'Ohio'])
years = np.array([2005, 2005, 2006, 2005, 2006])
df['data1'].groupby([states, years]).mean()
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
Frequently the grouping information is found in the same DataFrame as the data you want to work on. In that case, you can pass column names (whether those are strings, numbers, or other Python objects) as the group keys:
df.groupby('key1').mean() df.groupby(['key1', 'key2']).mean()
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
You may have noticed in the first case df.groupby('key1').mean() that there is no key2 column in the result. *Because df['key2'] is not numeric data*, it is said to be a `nuisance column`, which is therefore excluded from the result. `By default, all of the numeric columns are aggregated`, though it is possible to filter down to a subset, as you’ll see soon. Regardless of the objective in using groupby, a generally useful GroupBy method is size, which returns a Series containing group sizes:
df.groupby(['key1', 'key2']).size()
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
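Returning to the nuisance-column note above: in recent pandas versions non-numeric columns are no longer dropped silently, so it can pay to be explicit. A hedged sketch (the numeric_only flag exists in current pandas, but whether you need it depends on your version):

```python
# Restrict the aggregation to numeric columns explicitly
df.groupby('key1').mean(numeric_only=True)

# Or select the numeric columns up front
df.groupby('key1')[['data1', 'data2']].mean()
```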
Take note that any missing values in a group key will be excluded from the result. 10.1.1. Iterating Over Groups The `GroupBy object supports iteration`, generating a sequence of 2-tuples containing the group name along with the chunk of data. Consider the following:
for name, group in df.groupby('key1'):
    print(name)
    print(group)
a key1 key2 data1 data2 0 a one -0.204708 1.393406 1 a two 0.478943 0.092908 4 a one 1.965781 1.246435 b key1 key2 data1 data2 2 b one -0.519439 0.281746 3 b two -0.555730 0.769023
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
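To make the earlier remark about missing group keys concrete, here is a small hedged sketch; df2 is an illustrative DataFrame, and the dropna argument to groupby is available in pandas 1.1 and later (older versions only offer the default behaviour shown first).

```python
df2 = pd.DataFrame({'key': ['a', 'b', np.nan, 'a'],
                    'val': [1, 2, 3, 4]})

df2.groupby('key').sum()                 # the row with a NaN key is silently excluded
df2.groupby('key', dropna=False).sum()   # keeps NaN as its own group (pandas >= 1.1)
```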
In the case of multiple keys, the first element in the tuple will be a tuple of key values:
for (k1, k2), group in df.groupby(['key1', 'key2']):
    print((k1, k2))
    print(group)
('a', 'one') key1 key2 data1 data2 0 a one -0.204708 1.393406 4 a one 1.965781 1.246435 ('a', 'two') key1 key2 data1 data2 1 a two 0.478943 0.092908 ('b', 'one') key1 key2 data1 data2 2 b one -0.519439 0.281746 ('b', 'two') key1 key2 data1 data2 3 b two -0.55573 0.769023
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
Of course, you can choose to do whatever you want with the pieces of data. *A recipe you may find useful* is `computing a dict of the data pieces as a one-liner`:
pieces = dict(list(df.groupby('key1')))
pieces
# pieces['b']
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
By default groupby groups on `axis=0`, but you `can group on any of the other axes`. For example, we could group the columns of our example df here by dtype like so:
df.dtypes

grouped = df.groupby(df.dtypes, axis=1)
grouped

# We can print out the groups like so:
for dtype, group in grouped:
    print(dtype)
    print(group)
float64 data1 data2 0 -0.204708 1.393406 1 0.478943 0.092908 2 -0.519439 0.281746 3 -0.555730 0.769023 4 1.965781 1.246435 object key1 key2 0 a one 1 a two 2 b one 3 b two 4 a one
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
10.1.2. Selecting a Column or Subset of Columns Indexing a GroupBy object created from a DataFrame with a column name or array of column names has the effect of column subsetting for aggregation. This means that:
df.groupby('key1')['data1'] df.groupby('key1')[['data2']]
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
are `syntactic sugar` for:
df['data1'].groupby(df['key1']) df[['data2']].groupby(df['key1'])
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
Especially for large datasets, it may be desirable to aggregate only a few columns. For example, in the preceding dataset, to *compute means for just the data2 column* and get the result as a DataFrame, we could write:
df.groupby(['key1', 'key2'])[['data2']].mean()
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
`The object returned by this indexing operation is a grouped DataFrame` if a *list or array* is passed, or `a grouped Series` if only a *single column name is passed* as a scalar:
s_grouped = df.groupby(['key1', 'key2'])['data2'] s_grouped s_grouped.mean()
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
10.1.3. Grouping with Dicts and Series Grouping information may exist in a form other than an array. Let’s consider another example DataFrame:
people = pd.DataFrame(np.random.randn(5, 5),
                      columns=['a', 'b', 'c', 'd', 'e'],
                      index=['Joe', 'Steve', 'Wes', 'Jim', 'Travis'])
people.iloc[2:3, [1, 2]] = np.nan  # Add a few NA values
people
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
Now, suppose I have a group correspondence for the columns and want to sum together the columns by group:
mapping = {'a': 'red', 'b': 'red', 'c': 'blue', 'd': 'blue', 'e': 'red', 'f' : 'orange'} mapping
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
Now, you could construct an array from this dict to pass to groupby, but instead we can just pass the dict (I included the key 'f' to highlight that unused grouping keys are OK):
by_column = people.groupby(mapping, axis=1) by_column.sum()
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
`The same functionality holds for Series`, which can be viewed as a *fixed-size mapping*:
map_series = pd.Series(mapping) map_series people.groupby(map_series, axis=1).count()
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
10.1.4. Grouping with Functions Using Python functions is a more generic way of defining a group mapping compared with a dict or Series. `Any function passed as a group key will be called once per index value, with the return values being used as the group names`. More concretely, consider the example DataFrame from the previous section, which has people’s first names as index values. Suppose you wanted to group by the length of the names; while you could compute an array of string lengths, it’s simpler to just pass the len function:
people.groupby(len).sum()
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
Mixing functions with arrays, dicts, or Series is not a problem as everything gets converted to arrays internally:
key_list = ['one', 'one', 'one', 'two', 'two']
people.groupby([len, key_list]).min()
#@P 20210901: not understand how key_list function in this syntax
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
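To answer the question in the comment above: groupby matches each entry of key_list positionally with the corresponding index label, applies len to that label, and so assigns every row a composite key of the form (name length, key_list value). A small sketch that just prints those composite keys, for illustration only:

```python
key_list = ['one', 'one', 'one', 'two', 'two']

# groupby pairs the i-th index label with the i-th element of key_list
for name, key in zip(people.index, key_list):
    print(name, '->', (len(name), key))
# e.g. Joe -> (3, 'one'), Steve -> (5, 'one'), Jim -> (3, 'two'), ...
```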
10.1.5. Grouping by Index Levels A final convenience for hierarchically indexed datasets is the ability to aggregate using one of the levels of an axis index. Let’s look at an example:
columns = pd.MultiIndex.from_arrays([['US', 'US', 'US', 'JP', 'JP'],
                                     [1, 3, 5, 1, 3]],
                                    names=['cty', 'tenor'])
hier_df = pd.DataFrame(np.random.randn(4, 5), columns=columns)
hier_df
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
* *To group by level*, pass the level number or name using the level keyword:
hier_df.groupby(level='cty', axis=1).count()
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
10.2. Data Aggregation `Aggregations` refer to *any data transformation that produces scalar values from arrays*. The preceding examples have used several of them, including *mean, count, min, and sum*. While `quantile` is not explicitly implemented for GroupBy, it is a Series method and thus available for use. Internally, *GroupBy efficiently slices up the Series*, *calls piece.quantile(0.9) for each piece*, and then *assembles those results together into the result object*:
df

grouped = df.groupby('key1')
grouped['data1'].quantile(0.9)
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
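The description above, in which GroupBy slices the Series and calls piece.quantile on each group, can be written out explicitly with apply. This is a sketch of the equivalence, not how pandas implements it internally.

```python
# Equivalent, more explicit spelling of grouped['data1'].quantile(0.9)
grouped['data1'].apply(lambda piece: piece.quantile(0.9))
```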
To use your own aggregation functions, pass any function that aggregates an array to the `aggregate` or `agg` method:
def peak_to_peak(arr):
    return arr.max() - arr.min()

grouped.agg(peak_to_peak)

# @P 20210902: agg is an alias for aggregate, so both calls return the same result.
# The FutureWarning below concerns non-numeric ("nuisance") columns being dropped
# during aggregation, not agg itself being deprecated.
grouped.aggregate(peak_to_peak)
D:\Users\phu.le2\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py:303: FutureWarning: Dropping invalid columns in SeriesGroupBy.agg is deprecated. In a future version, a TypeError will be raised. Before calling .agg, select only columns which should be valid for the aggregating function. results[key] = self.aggregate(func)
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
You may notice that some methods like `describe` also work, even though they are not aggregations, strictly speaking:
grouped.describe()
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
**NOTE** *Custom aggregation functions are generally much slower than the optimized functions found in Table 10-1* *(count, sum, mean, median, std, var, min, max, prod, first, last)*. This is because there is some extra overhead (function calls, data rearrangement) in constructing the intermediate group data chunks. 10.2.1. Column-Wise and Multiple Function Application Let’s return to the tipping dataset from earlier examples. After loading it with read_csv, we add a tipping percentage column tip_pct:
tips = pd.read_csv('examples/tips.csv')
tips.head()

# Add tip percentage of total bill
tips['tip_pct'] = tips['tip'] / tips['total_bill']
tips[:6]
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
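A hedged sketch of how you might verify the NOTE above on your own machine; the exact timings will vary, but the built-in string aggregation is typically much faster than an equivalent Python-level function.

```python
import time

g = tips.groupby(['day', 'smoker'])['tip_pct']

t0 = time.perf_counter(); g.agg('mean'); t1 = time.perf_counter()
t2 = time.perf_counter(); g.agg(lambda x: x.mean()); t3 = time.perf_counter()

print('built-in mean :', t1 - t0)   # optimized aggregation path
print('custom lambda :', t3 - t2)   # per-group Python function calls
```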
As you’ve already seen, aggregating a Series or all of the columns of a DataFrame is a matter of using `aggregate` with the desired function or calling a method like mean or std. However, you may want to aggregate using a different function depending on the column, or multiple functions at once. Fortunately, this is possible to do, which I’ll illustrate through a number of examples. First, I’ll group the tips by day and smoker:
grouped = tips.groupby(['day', 'smoker'])
grouped
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
Note that for descriptive statistics like those in Table 10-1, you can pass the name of the function as a string:
grouped_pct = grouped['tip_pct']
grouped_pct
grouped_pct.agg('mean')
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
If you *pass a list of functions or function names* instead, you get back a DataFrame with column names taken from the functions:
grouped_pct.agg(['mean', 'std', peak_to_peak])
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
Here we passed a list of aggregation functions to `agg` to evaluate independently on the data groups. `You don’t need to accept the names that GroupBy gives to the columns`; notably, lambda functions have the name '<lambda>', which makes them hard to identify (you can see for yourself by looking at a function’s __name__ attribute). Thus, *if you pass a list of (name, function) tuples,* `the first element of each tuple will be used as the DataFrame column names` (you can think of a list of 2-tuples as an ordered mapping):
grouped_pct.agg([('foo', 'mean'), ('bar', np.std)]) #@P: mean and std but are named as foo and bar
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
With a DataFrame you have more options, as you can specify a list of functions to apply to all of the columns or different functions per column. To start, suppose we wanted to compute the same three statistics for the *tip_pct* and *total_bill* columns:
functions = ['count', 'mean', 'max']
result = grouped['tip_pct', 'total_bill'].agg(functions)
result

grouped

# To deal with "FutureWarning: Indexing with multiple keys (implicitly converted to a tuple
# of keys) will be deprecated, use a list instead."
# https://stackoverflow.com/questions/61634759/python-futurewarning-indexing-with-multiple-keys-implicitly-converted-to-a-tup
# -> an extra pair of brackets solves it
functions = ['count', 'mean', 'max']
result = grouped[['tip_pct', 'total_bill']].agg(functions)
result
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
As you can see, the resulting DataFrame has hierarchical columns, the same as you would get aggregating each column separately and using `concat` to glue the results together using the column names as the keys argument:
result['tip_pct']
_____no_output_____
MIT
10_Data Aggregation and Group Operations.ipynb
quangphu1912/Py-data-analysis-McKinney
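The claim above about concat can be spelled out directly. A minimal sketch of building the same hierarchical columns by aggregating each column separately and gluing the results together with the column names as the keys argument; it reuses grouped and functions from the cells above.

```python
# Same result as grouped[['tip_pct', 'total_bill']].agg(functions)
pieces = [grouped[col].agg(functions) for col in ['tip_pct', 'total_bill']]
manual = pd.concat(pieces, axis=1, keys=['tip_pct', 'total_bill'])
manual
```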