Export the data and model configuration

**Note:** Before exporting data ensure that Neptune Export has been configured as described here: [Neptune Export Service](https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning-data-export-service.html#machine-learning-data-export-service-run-export)

With our product knowledge graph loaded we are ready to export the data and configuration that will be used to train the ML model. The export process is triggered by a call to the [Neptune Export service endpoint](https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning-data-export-service.html). This call contains a configuration object which specifies the type of machine learning model to build, in this example link prediction, as well as any feature configurations required.

**Note:** The configuration used in this notebook specifies only a minimal set of configuration options, meaning that our model's predictions are not as accurate as they could be. The parameters included here are only a small subset of the options available to the end user to tune the model and optimize the accuracy of the resulting predictions.

The configuration options provided to the export service are broken into two main sections: selecting the target and configuring features.

**Selecting the target**

In the first section, selecting the target, we specify what type of machine learning task will be run. To run a link prediction model, do not specify any `targets` in the `additionalParams` value. Unlike node classification or node regression, link prediction can be used to predict any edge type that exists in the graph between any two vertices. Because of this, there is no need to define a target set of values.

**Configuring features**

The second section of the configuration, configuring features, is where we specify details about the types of data stored in our graph and how the machine learning model should interpret that data. In machine learning, each property is known as a feature, and these features are used by the model to make predictions.

When data is exported from Neptune all properties of all vertices are included, and each property is treated as a separate feature for the ML model. Neptune ML does its best to infer the correct feature type for a property, but in many cases the accuracy of the model can be improved by specifying additional information about the property used for a feature. By default Neptune ML puts features into one of two categories:

* If the feature represents a numerical property (float, double, int) then it is treated as a `numerical` feature type. In this feature type, data is represented as a continuous set of numbers. In our example, the `age` of a `user` is best represented as a numerical feature since age is a continuous set of values.
* All other property types are represented as `category` features. In this feature type, each unique value of the data is represented as a unique value in the set of classifications used by the model. In our MovieLens example the `occupation` of a `user` is a good example of a `category` feature, as we want to group users that all have the same job.

If all of the properties fit into these two feature types then no configuration changes are needed at the time of export. However, in many scenarios these defaults are not the best choice. In those cases, additional configuration options should be specified to better define how the property should be represented as a feature.

One common feature that needs additional configuration is numerical data, specifically numerical properties that represent chunks or groups of items instead of a continuous stream. Let's say that instead of representing `age` as a set of continuous values we want to represent it as a set of discrete buckets of values (e.g. 18-25, 26-34, 35-44, etc.). In this scenario we specify some additional attributes of the feature to bucket the values into known groups. We achieve this by declaring the feature as a `bucket_numerical` feature. This feature type takes a range of expected values, as well as a number of buckets, and groups data into buckets during the training process.

Another common feature that needs additional attributes is text, such as names, titles, or descriptions. While Neptune ML treats these as categorical features by default, in reality such values are likely to be unique for each node. For example, since the `title` property of a `movie` node does not fit into a category grouping, our model is better served by representing this type of feature as a `text_word2vec` feature. A `text_word2vec` feature uses techniques from natural language processing to create a vector of data that represents a string of text.

In the export example below we have specified that the `title` property of our `movie` should be exported and trained as a `text_word2vec` feature, and that our `age` field should range from 1-100 with the data bucketed into 10 distinct groups.

**Important:** The example below specifies only a minimal set of the model configuration parameters and will not create the most accurate model possible. Additional options available for tuning this configuration to produce an optimal model are described here: Neptune Export Process Parameters.

Running the cell below we set the export configuration and run the export process. Neptune Export is capable of automatically creating a clone of the cluster by setting `cloneCluster=True`, which takes about 20 minutes to complete and will incur additional costs while the cloned cluster is running. Exporting from the existing cluster takes about 5 minutes but requires that the `neptune_query_timeout` parameter in the [parameter group](https://docs.aws.amazon.com/neptune/latest/userguide/parameters.html) is set to a large enough value (>72000) to prevent timeout errors.
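Before running it, and purely for illustration, the snippet below sketches what additional entries in the `features` list could look like, for example declaring the `occupation` property of a `user` explicitly as a `category` feature. This is an assumption-laden sketch of the configuration shape only; it is not used by the export cell that follows, which configures only `title` and `age`.

# Hypothetical feature entries, for illustration only -- not part of the export cell below.
# Each dict follows the same shape as the entries in export_params["additionalParams"]["neptune_ml"]["features"].
illustrative_features = [
    # Treat the user's occupation explicitly as a categorical feature (assumed "category" type)
    {"node": "user", "property": "occupation", "type": "category"},
    # Bucket the user's age into 10 groups over the expected 1-100 range (same as the cell below)
    {"node": "user", "property": "age", "type": "bucket_numerical", "range": [1, 100], "num_buckets": 10}
]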
export_params = {
    "command": "export-pg",
    "params": {
        "endpoint": neptune_ml.get_host(),
        "profile": "neptune_ml",
        "cloneCluster": False
    },
    "outputS3Path": f'{s3_bucket_uri}/neptune-export',
    "additionalParams": {
        "neptune_ml": {
            "version": "v2.0",
            "features": [
                {
                    "node": "movie",
                    "property": "title",
                    "type": "word2vec"
                },
                {
                    "node": "user",
                    "property": "age",
                    "type": "bucket_numerical",
                    "range": [1, 100],
                    "num_buckets": 10
                }
            ]
        }
    },
    "jobSize": "medium"
}

%%neptune_ml export start --export-url {neptune_ml.get_export_service_host()} --export-iam --wait --store-to export_results ${export_params}
_____no_output_____
ISC
src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb
zacharyrs/graph-notebook
ML data processing, model training, and endpoint creation

Once the export job is completed we are ready to train our machine learning model and create the inference endpoint. Training our Neptune ML model requires three steps.

**Note:** The cells below only configure a minimal set of the parameters required to run model training.

**Data processing**

The first step (data processing) processes the exported graph dataset using standard feature preprocessing techniques to prepare it for use by DGL. This step performs functions such as feature normalization for numeric data and encoding text features using word2vec. At the conclusion of this step the dataset is formatted for model training. This step is implemented using a SageMaker Processing Job, and data artifacts are stored in a pre-specified S3 location once the job is complete.

Additional options and configuration parameters for the data processing job can be found using the links below:

* [Data Processing](https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning-on-graphs-processing.html)
* [dataprocessing command](https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning-api-dataprocessing.html)

Run the cells below to create the data processing configuration and to begin the processing job.
# The training_job_name can be set to a unique value below, otherwise one will be auto generated
training_job_name = neptune_ml.get_training_job_name('link-prediction')

processing_params = f"""
--config-file-name training-data-configuration.json
--job-id {training_job_name}
--s3-input-uri {export_results['outputS3Uri']}
--s3-processed-uri {str(s3_bucket_uri)}/preloading """

%neptune_ml dataprocessing start --wait --store-to processing_results {processing_params}
_____no_output_____
ISC
src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb
zacharyrs/graph-notebook
Model training

The second step (model training) trains the ML model that will be used for predictions. The model training is done in two stages. The first stage uses a SageMaker Processing job to generate a model training strategy. A model training strategy is a configuration set that specifies what type of model and which model hyperparameter ranges will be used for the model training. Once the first stage is complete, the SageMaker Processing job launches a SageMaker Hyperparameter tuning job. The SageMaker Hyperparameter tuning job runs a pre-specified number of model training job trials on the processed data and stores the model artifacts generated by the training in the output S3 location. Once all the training jobs are complete, the Hyperparameter tuning job also notes the training job that produced the best performing model.

Additional options and configuration parameters for the model training job can be found using the links below:

* [Model Training](https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning-on-graphs-model-training.html)
* [modeltraining command](https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning-api-modeltraining.html)

**Information:** Link prediction is a more computationally complex model than classification or regression, so training this model will take 2-3 hours.
training_params = f"""
--job-id {training_job_name}
--data-processing-id {training_job_name}
--instance-type ml.p3.2xlarge
--s3-output-uri {str(s3_bucket_uri)}/training
--max-hpo-number 2
--max-hpo-parallel 2 """

%neptune_ml training start --wait --store-to training_results {training_params}
_____no_output_____
ISC
src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb
zacharyrs/graph-notebook
Endpoint creation

The final step is to create the inference endpoint, which is an Amazon SageMaker endpoint instance that is launched with the model artifacts produced by the best training job. This endpoint will be used by our graph queries to return the model predictions for the inputs in the request. Once created, the endpoint stays active until it is manually deleted. Each model is tied to a single endpoint.

Additional options and configuration parameters for endpoint creation can be found using the links below:

* [Inference Endpoint](https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning-on-graphs-inference-endpoint.html)
* [Endpoint command](https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning-api-endpoints.html)

**Information:** The endpoint creation process takes ~5-10 minutes.
endpoint_params = f"""
--id {training_job_name}
--model-training-job-id {training_job_name}"""

%neptune_ml endpoint create --wait --store-to endpoint_results {endpoint_params}
_____no_output_____
ISC
src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb
zacharyrs/graph-notebook
Once this has completed, we get the name of our newly created inference endpoint. The cell below stores the endpoint name so that it can be used in the Gremlin queries that follow.
endpoint=endpoint_results['endpoint']['name']
_____no_output_____
ISC
src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb
zacharyrs/graph-notebook
Querying using Gremlin

Now that we have our inference endpoint set up, let's query our product knowledge graph to show how to predict how likely it is that a user will rate a movie. Predicting the likelihood of connections in a product knowledge graph is commonly used to provide recommendations for products that a customer might purchase.

Unlike node classification and node regression, link prediction can infer any of the edge labels that existed in our graph when the model was created. In our model this means we could infer the probability that a `wrote`, `about`, `rated`, or `included_in` edge exists between any two vertices. However, for this example we are going to focus on inferring the `rated` edges between the `user` and `movie` vertices.

Predicting what movies a user will rate

Before we predict what movies `user_1` is most likely to rate, let's verify that our graph does not contain any `rated` edges for `user_1`.
%%gremlin
g.V('user_1').out('rated').hasLabel('movie').valueMap()
_____no_output_____
ISC
src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb
zacharyrs/graph-notebook
As expected, there are not any `rated` edges for `user_1`. Maybe `user_1` is a new user in our system and we want to provide them some product recommendations. Let's modify the query to predict what movies `user_1` is most likely to rate.

First, we add a `with()` step to specify the inference endpoint we want to use with our Gremlin query, like this: `g.with("Neptune#ml.endpoint","<endpoint name>")`.

**Note:** The endpoint values are automatically passed into the queries below.

Second, when we ask for the link within our query we use the `out()` step to predict the target node or the `in()` step to predict the source node. For each of these steps we need to specify the type of model being used with a `with()` step (`with("Neptune#ml.prediction")`).

Putting these items together we get the query below, which returns the movies that `user_1` is likely to rate.
%%gremlin
g.with("Neptune#ml.endpoint","${endpoint}").
  V('user_1').out('rated').with("Neptune#ml.prediction").hasLabel('movie').valueMap('title')
_____no_output_____
ISC
src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb
zacharyrs/graph-notebook
Great, we can now see the predicted edges, showing that `Sleepers` is the movie that `user_1` is most likely to rate. In the example above we predicted the target node, but we can also use the same mechanism to predict the source node. Let's turn that question around and say that we have a product and we want to find the people most likely to rate it.

Predicting the top 10 users most likely to rate a movie

To accomplish this we start at the movie vertex and predict the `rated` edge back to the user. Since we want to return the top 10 recommended users we use the `.with("Neptune#ml.limit",10)` configuration option. Combining these together we get the query below, which finds the top 10 users most likely to rate `Apollo 13`.
%%gremlin
g.with("Neptune#ml.endpoint","${endpoint}").
  with("Neptune#ml.limit",10).
  V().has('title', 'Apollo 13 (1995)').
  in('rated').with("Neptune#ml.prediction").hasLabel('user').id()
_____no_output_____
ISC
src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb
zacharyrs/graph-notebook
With that we have successfully shown how you can use link prediction to predict edges starting at either end. From the examples shown here you can begin to see how the ability to infer unknown connections within a graph enables many interesting and unique use cases within Amazon Neptune.

Cleaning up

Now that you have completed this walkthrough you have a SageMaker endpoint which is currently running and will incur the standard charges. If you are done trying out Neptune ML and would like to avoid these recurring costs, run the cell below to delete the inference endpoint.
neptune_ml.delete_endpoint(training_job_name)
_____no_output_____
ISC
src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb
zacharyrs/graph-notebook
Challenge 01: Find the relative frequencies of the registrants' ages. Challenge 02: Find out which states the 13-year-old registrants are from. Challenge 03: Add a title to the chart. (A sketch of one possible approach to Challenges 01 and 03 is shown below.)
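As a hedged sketch of how Challenges 01 and 03 could be approached with columns already used in this notebook (`NU_IDADE` in the `dados` DataFrame), the cell below computes the relative frequency of each age and adds a title to the age histogram. It assumes `matplotlib.pyplot` is available.

import matplotlib.pyplot as plt

# Challenge 01: relative (normalized) frequency of each registrant age
relative_ages = dados["NU_IDADE"].value_counts(normalize=True).sort_index()
print(relative_ages.head(10))

# Challenge 03: age histogram with a title
dados["NU_IDADE"].hist(bins=50, figsize=(16, 7))
plt.title("Age distribution of ENEM registrants")
plt.show()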
# Showing the histogram of the participants' ages
dados["NU_IDADE"].hist(bins=50, figsize=(16, 7))

# Querying the dataset to keep only trainee observations, then showing the age counts in ascending order
dados.query("IN_TREINEIRO == 1")["NU_IDADE"].value_counts().sort_index().head(5)
_____no_output_____
MIT
Alura/imersaoDados02/aula 2/ImersaoDados02Aula02.ipynb
W8jonas/estudos
Challenge 04: Plot the histograms of the ages of trainees and non-trainees. (A possible sketch is shown below.)
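A minimal sketch of one way to tackle Challenge 04, assuming only columns already used in this notebook (`IN_TREINEIRO` and `NU_IDADE`):

import matplotlib.pyplot as plt

# Overlay the age histograms of trainees (IN_TREINEIRO == 1) and non-trainees (IN_TREINEIRO == 0)
plt.figure(figsize=(16, 7))
dados.query("IN_TREINEIRO == 1")["NU_IDADE"].hist(bins=50, alpha=0.6, label="Trainees")
dados.query("IN_TREINEIRO == 0")["NU_IDADE"].hist(bins=50, alpha=0.6, label="Non-trainees")
plt.title("Age distribution: trainees vs. non-trainees")
plt.legend()
plt.show()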
# Plotting the histogram of the essay scores
dados["NU_NOTA_REDACAO"].hist(bins=50, figsize=(19, 7))

# Plotting the histogram of the Human Sciences scores
dados["NU_NOTA_CH"].hist(bins=50, figsize=(19, 7))

# Querying the dataset to keep only observations from the state of Minas Gerais,
# then selecting only the exam scores,
# and finally showing descriptive statistics
provas = ["NU_NOTA_LC", "NU_NOTA_CH", "NU_NOTA_MT", "NU_NOTA_CN", "NU_NOTA_REDACAO"]
dados.query("SG_UF_RESIDENCIA == 'MG'")[provas].describe()

# Querying the dataset to keep only observations from the state of Minas Gerais,
# then selecting only the exam scores,
# and finally plotting a boxplot of the exam scores
provas = ["NU_NOTA_LC", "NU_NOTA_CH", "NU_NOTA_MT", "NU_NOTA_CN", "NU_NOTA_REDACAO"]
dados.query("SG_UF_RESIDENCIA == 'MG'")[provas].boxplot(grid=1, figsize=(18, 8))
_____no_output_____
MIT
Alura/imersaoDados02/aula 2/ImersaoDados02Aula02.ipynb
W8jonas/estudos
Challenge 05: Compare the score distributions of the English and Spanish test takers on the LC (Languages and Codes) exam. (A possible sketch is shown below.)
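A hedged sketch of one possible approach to Challenge 05. It assumes the ENEM microdata column `TP_LINGUA` encodes the chosen foreign language (0 = English, 1 = Spanish); that column name and encoding are an assumption, since the column is not used elsewhere in this notebook.

import seaborn as sns
import matplotlib.pyplot as plt

# Assumption: TP_LINGUA identifies the chosen foreign language (0 = English, 1 = Spanish)
plt.figure(figsize=(16, 7))
sns.histplot(data=dados, x="NU_NOTA_LC", hue="TP_LINGUA", stat="density", common_norm=False)
plt.title("LC score distributions by chosen foreign language")
plt.show()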
# Solving challenge 01
dados.query("NU_IDADE <= 14")["SG_UF_RESIDENCIA"].value_counts(normalize=True)

# Ordering the income categories (Q006)
renda_ordenada = dados["Q006"].unique()
renda_ordenada.sort()
print(renda_ordenada)

import seaborn as sns
import matplotlib.pyplot as plt

plt.figure(figsize=(18, 8))
sns.boxplot(x="Q006", y="NU_NOTA_REDACAO", data=dados, order=renda_ordenada)
plt.title("Boxplot of essay scores by income")

# Total score per student across all exams
total_notas_por_aluno = dados[provas].sum(axis=1)
dados["NU_NOTA_TOTAL"] = total_notas_por_aluno
dados.head()

plt.figure(figsize=(18, 8))
dados_sem_zeros = dados.query("NU_NOTA_TOTAL != 0")
sns.boxplot(x="Q006", y="NU_NOTA_TOTAL", data=dados_sem_zeros, order=renda_ordenada)
plt.title("Boxplot of total scores by income")

sns.displot(dados, x="NU_NOTA_TOTAL")

plt.figure(figsize=(18, 8))
sns.boxplot(x="Q006", y="NU_NOTA_TOTAL", data=dados_sem_zeros, order=renda_ordenada, hue="IN_TREINEIRO")
plt.title("Boxplot of total scores by income and trainee status")
_____no_output_____
MIT
Alura/imersaoDados02/aula 2/ImersaoDados02Aula02.ipynb
W8jonas/estudos
Nessie Iceberg/Flink SQL Demo with NBA Dataset
============================
This demo showcases how to use the Nessie Python API along with Flink from Iceberg.

Initialize PyFlink
----------------------------------------------
To get started, we will first have to do a few setup steps that give us everything we need to get started with Nessie. In case you're interested in the detailed setup steps for Flink, you can check out the [docs](https://projectnessie.org/tools/flink/).

The Binder server has downloaded Flink and some data for us, as well as started a Nessie server in the background. All we have to do is start Flink.

The cell below starts a local Flink session with the parameters needed to configure Nessie. Each config option is followed by a comment explaining its purpose.
import os
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment
from pyflink.table.expressions import lit
from pynessie import init

# where we will store our data
warehouse = os.path.join(os.getcwd(), "flink-warehouse")
# this was downloaded when Binder started, it's available on maven central
iceberg_flink_runtime_jar = os.path.join(os.getcwd(), "../iceberg-flink-runtime-0.12.0.jar")

env = StreamExecutionEnvironment.get_execution_environment()
env.add_jars("file://{}".format(iceberg_flink_runtime_jar))
table_env = StreamTableEnvironment.create(env)

nessie_client = init()


def create_ref_catalog(ref):
    """
    Create a flink catalog that is tied to a specific ref.

    In order to create the catalog we have to first create the branch
    """
    hash_ = nessie_client.get_reference(nessie_client.get_default_branch()).hash_
    try:
        nessie_client.create_branch(ref, hash_)
    except:
        pass  # already created
    # The important args below are:
    # type: tell Flink to use Iceberg as the catalog
    # catalog-impl: which Iceberg catalog to use, in this case we want Nessie
    # uri: the location of the nessie server.
    # ref: the Nessie ref/branch we want to use (defaults to main)
    # warehouse: the location this catalog should store its data
    table_env.execute_sql(
        f"""CREATE CATALOG {ref}_catalog WITH (
            'type'='iceberg',
            'catalog-impl'='org.apache.iceberg.nessie.NessieCatalog',
            'uri'='http://localhost:19120/api/v1',
            'ref'='{ref}',
            'warehouse' = '{warehouse}')"""
    )


create_ref_catalog(nessie_client.get_default_branch())
print("\n\n\nFlink running\n\n\n")
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
Solving Data Engineering problems with Nessie
============================
In this demo we are a data engineer working at a fictional sports analytics blog. In order for the authors to write articles they have to have access to the relevant data. They need to be able to retrieve data quickly and be able to create charts with it.

We have been asked to collect and expose some information about basketball players. We have located some data sources and are now ready to start ingesting data into our data lakehouse. We will perform the ingestion steps on a Nessie branch to test and validate the data before exposing it to the analysts.

Set up Nessie branches (via Nessie CLI)
----------------------------
Once all dependencies are configured, we can get started with ingesting our basketball data into `Nessie` with the following steps:

- Create a new branch named `dev`
- List all branches

It is worth mentioning that we don't have to explicitly create a `main` branch, since it's the default branch.
create_ref_catalog("dev")
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
We have created the branch `dev` and we can see the branch with the Nessie `hash` it's currently pointing to.

Below we list all branches. Note that the auto-created `main` branch already exists and both branches point at the same `hash`.
!nessie --verbose branch
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
Create tables under dev branch
-------------------------------------
Once we created the `dev` branch and verified that it exists, we can create some tables and add some data.

We create two tables under the `dev` branch:

- `salaries`
- `totals_stats`

These tables list the salaries per player per year and their stats per year.

To create the data we:

1. switch our branch context to dev
2. create the table
3. insert the data from an existing csv file. This csv file is already stored locally on the demo machine. A production use case would likely take feeds from official data sources.
# Load the dataset
from pyflink.table import DataTypes
from pyflink.table.descriptors import Schema, OldCsv, FileSystem

# Creating `salaries` table
(table_env.connect(FileSystem().path('../datasets/nba/salaries.csv'))
    .with_format(OldCsv()
        .field('Season', DataTypes.STRING()).field("Team", DataTypes.STRING())
        .field("Salary", DataTypes.STRING()).field("Player", DataTypes.STRING()))
    .with_schema(Schema()
        .field('Season', DataTypes.STRING()).field("Team", DataTypes.STRING())
        .field("Salary", DataTypes.STRING()).field("Player", DataTypes.STRING()))
    .create_temporary_table('dev_catalog.nba.salaries_temp'))

table_env.execute_sql("""CREATE TABLE IF NOT EXISTS dev_catalog.nba.salaries
    (Season STRING, Team STRING, Salary STRING, Player STRING)""").wait()

tab = table_env.from_path('dev_catalog.nba.salaries_temp')
tab.execute_insert('dev_catalog.nba.salaries').wait()

# Creating `totals_stats` table
(table_env.connect(FileSystem().path('../datasets/nba/totals_stats.csv'))
    .with_format(OldCsv()
        .field('Season', DataTypes.STRING()).field("Age", DataTypes.STRING()).field("Team", DataTypes.STRING())
        .field("ORB", DataTypes.STRING()).field("DRB", DataTypes.STRING()).field("TRB", DataTypes.STRING())
        .field("AST", DataTypes.STRING()).field("STL", DataTypes.STRING()).field("BLK", DataTypes.STRING())
        .field("TOV", DataTypes.STRING()).field("PTS", DataTypes.STRING()).field("Player", DataTypes.STRING())
        .field("RSorPO", DataTypes.STRING()))
    .with_schema(Schema()
        .field('Season', DataTypes.STRING()).field("Age", DataTypes.STRING()).field("Team", DataTypes.STRING())
        .field("ORB", DataTypes.STRING()).field("DRB", DataTypes.STRING()).field("TRB", DataTypes.STRING())
        .field("AST", DataTypes.STRING()).field("STL", DataTypes.STRING()).field("BLK", DataTypes.STRING())
        .field("TOV", DataTypes.STRING()).field("PTS", DataTypes.STRING()).field("Player", DataTypes.STRING())
        .field("RSorPO", DataTypes.STRING()))
    .create_temporary_table('dev_catalog.nba.totals_stats_temp'))

table_env.execute_sql(
    """CREATE TABLE IF NOT EXISTS dev_catalog.nba.totals_stats
    (Season STRING, Age STRING, Team STRING, ORB STRING, DRB STRING, TRB STRING, AST STRING,
    STL STRING, BLK STRING, TOV STRING, PTS STRING, Player STRING, RSorPO STRING)""").wait()

tab = table_env.from_path('dev_catalog.nba.totals_stats_temp')
tab.execute_insert('dev_catalog.nba.totals_stats').wait()

salaries = table_env.from_path('main_catalog.nba.`salaries@dev`').select(lit(1).count).to_pandas().values[0][0]
totals_stats = table_env.from_path('main_catalog.nba.`totals_stats@dev`').select(lit(1).count).to_pandas().values[0][0]
print(f"\n\n\nAdded {salaries} rows to the salaries table and {totals_stats} rows to the total_stats table.\n\n\n")
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
Now we count the rows in our tables to ensure they match the row counts of the csv files. Note that we use the `table@branch` notation, which overrides the context set by the catalog.
table_count = table_env.from_path('dev_catalog.nba.`salaries@dev`').select('Season.count').to_pandas().values[0][0]
csv_count = table_env.from_path('dev_catalog.nba.salaries_temp').select('Season.count').to_pandas().values[0][0]
assert table_count == csv_count
print(table_count)

table_count = table_env.from_path('dev_catalog.nba.`totals_stats@dev`').select('Season.count').to_pandas().values[0][0]
csv_count = table_env.from_path('dev_catalog.nba.totals_stats_temp').select('Season.count').to_pandas().values[0][0]
assert table_count == csv_count
print(table_count)
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
Check generated tables
----------------------------
Since we have been working solely on the `dev` branch, where we created 2 tables and added some data, let's verify that the `main` branch was not altered by our changes.
!nessie contents --list
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
And on the `dev` branch we expect to see two tables
!nessie contents --list --ref dev
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
We can also verify that the `dev` and `main` branches point to different commits
!nessie --verbose branch
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
Dev promotion into main
-----------------------
Once we are done with our changes on the `dev` branch, we would like to merge those changes into `main`. We merge `dev` into `main` via the command line `merge` command.

Both branches should be at the same revision after merging/promotion.
!nessie merge dev -b main --force
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
We can verify that the branches are at the same hash and that the `main` branch now contains the expected tables and row counts.

The tables are now on `main` and ready for consumption by our blog authors and analysts!
!nessie --verbose branch
!nessie contents --list

table_count = table_env.from_path('main_catalog.nba.salaries').select('Season.count').to_pandas().values[0][0]
csv_count = table_env.from_path('dev_catalog.nba.salaries_temp').select('Season.count').to_pandas().values[0][0]
assert table_count == csv_count

table_count = table_env.from_path('main_catalog.nba.totals_stats').select('Season.count').to_pandas().values[0][0]
csv_count = table_env.from_path('dev_catalog.nba.totals_stats_temp').select('Season.count').to_pandas().values[0][0]
assert table_count == csv_count
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
Perform regular ETL on the new tables
-------------------
Our analysts are happy with the data and we now want to regularly ingest data to keep things up to date. Our first ETL job consists of the following:

1. Update the salaries table to add new data
2. We have decided the `Age` column isn't required in the `total_stats` table so we will drop the column
3. We create a new table to hold information about the players' appearances in all star games

As always we will do this work on a branch and verify the results. This ETL job can then be set up to run nightly with new stats and salary information.
create_ref_catalog("etl")

# add some salaries for Kevin Durant
table_env.execute_sql("""INSERT INTO etl_catalog.nba.salaries
    VALUES ('2017-18', 'Golden State Warriors', '$25000000', 'Kevin Durant'),
    ('2018-19', 'Golden State Warriors', '$30000000', 'Kevin Durant'),
    ('2019-20', 'Brooklyn Nets', '$37199000', 'Kevin Durant'),
    ('2020-21', 'Brooklyn Nets', '$39058950', 'Kevin Durant')""").wait()

# Rename the table `totals_stats` to `new_total_stats`
table_env.execute_sql("ALTER TABLE etl_catalog.nba.totals_stats RENAME TO etl_catalog.nba.new_total_stats").wait()

# Creating `allstar_games_stats` table
(table_env.connect(FileSystem().path('../datasets/nba/allstar_games_stats.csv'))
    .with_format(OldCsv()
        .field('Season', DataTypes.STRING()).field("Age", DataTypes.STRING()).field("Team", DataTypes.STRING())
        .field("ORB", DataTypes.STRING()).field("TRB", DataTypes.STRING()).field("AST", DataTypes.STRING())
        .field("STL", DataTypes.STRING()).field("BLK", DataTypes.STRING()).field("TOV", DataTypes.STRING())
        .field("PF", DataTypes.STRING()).field("PTS", DataTypes.STRING()).field("Player", DataTypes.STRING()))
    .with_schema(Schema()
        .field('Season', DataTypes.STRING()).field("Age", DataTypes.STRING()).field("Team", DataTypes.STRING())
        .field("ORB", DataTypes.STRING()).field("TRB", DataTypes.STRING()).field("AST", DataTypes.STRING())
        .field("STL", DataTypes.STRING()).field("BLK", DataTypes.STRING()).field("TOV", DataTypes.STRING())
        .field("PF", DataTypes.STRING()).field("PTS", DataTypes.STRING()).field("Player", DataTypes.STRING()))
    .create_temporary_table('etl_catalog.nba.allstar_games_stats_temp'))

table_env.execute_sql(
    """CREATE TABLE IF NOT EXISTS etl_catalog.nba.allstar_games_stats
    (Season STRING, Age STRING, Team STRING, ORB STRING, TRB STRING, AST STRING,
    STL STRING, BLK STRING, TOV STRING, PF STRING, PTS STRING, Player STRING)""").wait()

tab = table_env.from_path('etl_catalog.nba.allstar_games_stats_temp')
tab.execute_insert('etl_catalog.nba.allstar_games_stats').wait()

# Notice how we view the data on the etl branch via @etl
table_env.from_path('etl_catalog.nba.`allstar_games_stats@etl`').to_pandas()
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
We can verify that the new table isn't on the `main` branch but is present on the `etl` branch.
# Since we have been working on the `etl` branch, the `allstar_games_stats` table is not on the `main` branch
!nessie contents --list

# We should see `allstar_games_stats` and the `new_total_stats` on the `etl` branch
!nessie contents --list --ref etl
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
Now that we are happy with the data we can again merge it into `main`
!nessie merge etl -b main --force
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
Now let's verify that the changes exist on the `main` branch and that the `main` and `etl` branches have the same `hash`.
!nessie contents --list
!nessie --verbose branch

table_count = table_env.from_path('main_catalog.nba.allstar_games_stats').select('Season.count').to_pandas().values[0][0]
csv_count = table_env.from_path('etl_catalog.nba.allstar_games_stats_temp').select('Season.count').to_pandas().values[0][0]
assert table_count == csv_count
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
Create `experiment` branch
--------------------------------
As a data analyst we might want to carry out some experiments with some data, without affecting `main` in any way. As in the previous examples, we can just get started by creating an `experiment` branch off of `main` and carry out our experiment, which could consist of the following steps:

- drop `totals_stats` table
- add data to `salaries` table
- compare `experiment` and `main` tables
create_ref_catalog("experiment")

# Drop the `totals_stats` table on the `experiment` branch
table_env.execute_sql("DROP TABLE IF EXISTS experiment_catalog.nba.new_total_stats")

# add some salaries for Dirk Nowitzki
table_env.execute_sql("""INSERT INTO experiment_catalog.nba.salaries
    VALUES ('2015-16', 'Dallas Mavericks', '$8333333', 'Dirk Nowitzki'),
    ('2016-17', 'Dallas Mavericks', '$25000000', 'Dirk Nowitzki'),
    ('2017-18', 'Dallas Mavericks', '$5000000', 'Dirk Nowitzki'),
    ('2018-19', 'Dallas Mavericks', '$5000000', 'Dirk Nowitzki')""").wait()

# We should see the `salaries` and `allstar_games_stats` tables only (since we just dropped `new_total_stats`)
!nessie contents --list --ref experiment

# `main` hasn't been changed and still has the `new_total_stats` table
!nessie contents --list
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
Let's take a look at the contents of the `salaries` table on the `experiment` branch. Notice the use of the `nessie` catalog and the use of `@experiment` to view data on the `experiment` branch.
table_env.from_path('main_catalog.nba.`salaries@experiment`').select(lit(1).count).to_pandas()
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
and compare to the contents of the `salaries` table on the `main` branch. Notice that we didn't have to specify `@branchName` as it defaulted to the `main` branch.
table_env.from_path('main_catalog.nba.`salaries@main`').select(lit(1).count).to_pandas()
_____no_output_____
Apache-2.0
notebooks/nessie-iceberg-flink-demo-nba.ipynb
sandhyasun/nessie-demos
Make model
model = tf.keras.models.Sequential()

model.add(tf.keras.layers.Embedding(input_length=500, input_dim=10000, output_dim=24))  # input layer
model.add(tf.keras.layers.LSTM(24, return_sequences=True, activation='tanh'))  # return_sequences is needed when stacking one more LSTM
model.add(tf.keras.layers.LSTM(12, activation='tanh'))
# model.add(tf.keras.layers.Flatten())  # hidden layer

model.add(tf.keras.layers.Dense(46, activation='softmax'))  # output layer

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])  # gadget

# hist = model.fit(pad_x_train, y_train, epochs=5, validation_split=0.3, batch_size=128)
hist = model.fit(pad_x_train, y_train, epochs=100, validation_split=0.3, batch_size=256)
Epoch 1/100 25/25 [==============================] - 20s 658ms/step - loss: 3.7195 - acc: 0.2704 - val_loss: 3.4674 - val_acc: 0.0479 Epoch 2/100 25/25 [==============================] - 16s 627ms/step - loss: 3.2134 - acc: 0.2591 - val_loss: 2.9466 - val_acc: 0.3532 Epoch 3/100 25/25 [==============================] - 16s 638ms/step - loss: 2.7723 - acc: 0.3510 - val_loss: 2.5808 - val_acc: 0.3532 Epoch 4/100 25/25 [==============================] - 16s 629ms/step - loss: 2.5263 - acc: 0.3510 - val_loss: 2.4439 - val_acc: 0.3532 Epoch 5/100 25/25 [==============================] - 16s 624ms/step - loss: 2.4537 - acc: 0.3510 - val_loss: 2.4077 - val_acc: 0.3532 Epoch 6/100 25/25 [==============================] - 16s 635ms/step - loss: 2.4323 - acc: 0.3510 - val_loss: 2.3950 - val_acc: 0.3532 Epoch 7/100 25/25 [==============================] - 16s 638ms/step - loss: 2.4233 - acc: 0.3510 - val_loss: 2.3891 - val_acc: 0.3532 Epoch 8/100 25/25 [==============================] - 16s 642ms/step - loss: 2.4190 - acc: 0.3510 - val_loss: 2.3863 - val_acc: 0.3532 Epoch 9/100 25/25 [==============================] - 16s 642ms/step - loss: 2.4164 - acc: 0.3510 - val_loss: 2.3847 - val_acc: 0.3532 Epoch 10/100 25/25 [==============================] - 16s 639ms/step - loss: 2.4149 - acc: 0.3510 - val_loss: 2.3839 - val_acc: 0.3532 Epoch 11/100 25/25 [==============================] - 16s 636ms/step - loss: 2.4139 - acc: 0.3510 - val_loss: 2.3833 - val_acc: 0.3532 Epoch 12/100 25/25 [==============================] - 16s 636ms/step - loss: 2.4133 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 13/100 25/25 [==============================] - 16s 649ms/step - loss: 2.4129 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 14/100 25/25 [==============================] - 16s 635ms/step - loss: 2.4126 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 15/100 25/25 [==============================] - 16s 641ms/step - loss: 2.4122 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 16/100 25/25 [==============================] - 16s 644ms/step - loss: 2.4123 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 17/100 25/25 [==============================] - 16s 641ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 18/100 25/25 [==============================] - 16s 636ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 19/100 25/25 [==============================] - 16s 633ms/step - loss: 2.4120 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 20/100 25/25 [==============================] - 16s 644ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 21/100 25/25 [==============================] - 16s 634ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3819 - val_acc: 0.3532 Epoch 22/100 25/25 [==============================] - 16s 632ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 23/100 25/25 [==============================] - 16s 635ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 24/100 25/25 [==============================] - 16s 641ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3819 - val_acc: 0.3532 Epoch 25/100 25/25 [==============================] - 16s 642ms/step - loss: 2.4029 - acc: 0.3510 - val_loss: 2.4550 - val_acc: 0.3532 Epoch 26/100 25/25 [==============================] - 16s 651ms/step - loss: 2.4105 - acc: 0.3510 - val_loss: 2.3819 - val_acc: 0.3532 Epoch 27/100 25/25 [==============================] - 
16s 641ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 28/100 25/25 [==============================] - 16s 635ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 29/100 25/25 [==============================] - 16s 639ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 30/100 25/25 [==============================] - 16s 642ms/step - loss: 2.4114 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 31/100 25/25 [==============================] - 16s 644ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 32/100 25/25 [==============================] - 16s 641ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 33/100 25/25 [==============================] - 16s 648ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 34/100 25/25 [==============================] - 16s 643ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3821 - val_acc: 0.3532 Epoch 35/100 25/25 [==============================] - 16s 641ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 36/100 25/25 [==============================] - 16s 640ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 37/100 25/25 [==============================] - 16s 644ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 38/100 25/25 [==============================] - 16s 647ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 39/100 25/25 [==============================] - 16s 633ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3826 - val_acc: 0.3532 Epoch 40/100 25/25 [==============================] - 16s 636ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 41/100 25/25 [==============================] - 16s 639ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 42/100 25/25 [==============================] - 16s 640ms/step - loss: 2.4122 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 43/100 25/25 [==============================] - 16s 643ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 44/100 25/25 [==============================] - 16s 638ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 45/100 25/25 [==============================] - 16s 638ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3825 - val_acc: 0.3532 Epoch 46/100 25/25 [==============================] - 16s 636ms/step - loss: 2.4118 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 47/100 25/25 [==============================] - 16s 643ms/step - loss: 2.4116 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 48/100 25/25 [==============================] - 16s 638ms/step - loss: 2.4119 - acc: 0.3510 - val_loss: 2.3823 - val_acc: 0.3532 Epoch 49/100 25/25 [==============================] - 16s 638ms/step - loss: 2.4117 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 50/100 25/25 [==============================] - 16s 641ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3824 - val_acc: 0.3532 Epoch 51/100 25/25 [==============================] - 16s 637ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3820 - val_acc: 0.3532 Epoch 52/100 25/25 [==============================] - 16s 634ms/step - loss: 2.4115 - acc: 0.3510 - val_loss: 2.3822 - val_acc: 0.3532 Epoch 53/100 25/25 [==============================] - 16s 642ms/step - loss: 2.4104 - acc: 0.3510 
- val_loss: 2.3803 - val_acc: 0.3532 Epoch 54/100 25/25 [==============================] - 16s 645ms/step - loss: 2.4060 - acc: 0.3510 - val_loss: 2.3665 - val_acc: 0.3532 Epoch 55/100 25/25 [==============================] - 16s 639ms/step - loss: 2.3382 - acc: 0.3755 - val_loss: 2.2555 - val_acc: 0.4764 Epoch 56/100 25/25 [==============================] - 16s 648ms/step - loss: 2.2553 - acc: 0.3689 - val_loss: 2.1948 - val_acc: 0.4553 Epoch 57/100 25/25 [==============================] - 16s 644ms/step - loss: 2.1991 - acc: 0.4641 - val_loss: 2.1491 - val_acc: 0.4842 Epoch 58/100 25/25 [==============================] - 16s 644ms/step - loss: 2.1441 - acc: 0.5096 - val_loss: 2.0970 - val_acc: 0.5239 Epoch 59/100 25/25 [==============================] - 16s 644ms/step - loss: 2.0953 - acc: 0.5254 - val_loss: 2.0584 - val_acc: 0.5288 Epoch 60/100 25/25 [==============================] - 16s 639ms/step - loss: 2.0709 - acc: 0.5160 - val_loss: 2.0558 - val_acc: 0.5006 Epoch 61/100 25/25 [==============================] - 16s 642ms/step - loss: 2.0219 - acc: 0.5340 - val_loss: 2.0104 - val_acc: 0.5276 Epoch 62/100 25/25 [==============================] - 16s 641ms/step - loss: 1.9885 - acc: 0.5398 - val_loss: 1.9924 - val_acc: 0.5187 Epoch 63/100 25/25 [==============================] - 16s 632ms/step - loss: 1.9596 - acc: 0.5397 - val_loss: 1.9764 - val_acc: 0.5165 Epoch 64/100 25/25 [==============================] - 16s 631ms/step - loss: 1.9213 - acc: 0.5418 - val_loss: 1.9706 - val_acc: 0.5132 Epoch 65/100 25/25 [==============================] - 16s 629ms/step - loss: 1.8728 - acc: 0.5379 - val_loss: 1.9073 - val_acc: 0.4994 Epoch 66/100 25/25 [==============================] - 16s 643ms/step - loss: 1.7927 - acc: 0.5683 - val_loss: 1.8431 - val_acc: 0.5299 Epoch 67/100 25/25 [==============================] - 16s 640ms/step - loss: 1.7446 - acc: 0.5780 - val_loss: 1.8360 - val_acc: 0.5273 Epoch 68/100 25/25 [==============================] - 16s 642ms/step - loss: 1.6942 - acc: 0.5825 - val_loss: 1.8078 - val_acc: 0.5351 Epoch 69/100 25/25 [==============================] - 16s 636ms/step - loss: 1.6616 - acc: 0.5858 - val_loss: 1.8114 - val_acc: 0.5310 Epoch 70/100 25/25 [==============================] - 16s 647ms/step - loss: 1.6356 - acc: 0.5925 - val_loss: 1.7646 - val_acc: 0.5432 Epoch 71/100 25/25 [==============================] - 16s 652ms/step - loss: 1.5982 - acc: 0.6039 - val_loss: 1.7583 - val_acc: 0.5518 Epoch 72/100 25/25 [==============================] - 16s 648ms/step - loss: 1.5679 - acc: 0.6122 - val_loss: 1.7503 - val_acc: 0.5570 Epoch 73/100 25/25 [==============================] - 16s 653ms/step - loss: 1.5409 - acc: 0.6176 - val_loss: 1.7591 - val_acc: 0.5518 Epoch 74/100 25/25 [==============================] - 16s 652ms/step - loss: 1.5137 - acc: 0.6218 - val_loss: 1.7479 - val_acc: 0.5555 Epoch 75/100 25/25 [==============================] - 16s 646ms/step - loss: 1.4868 - acc: 0.6268 - val_loss: 1.7797 - val_acc: 0.5466 Epoch 76/100 25/25 [==============================] - 16s 643ms/step - loss: 1.4703 - acc: 0.6267 - val_loss: 1.7516 - val_acc: 0.5525 Epoch 77/100 25/25 [==============================] - 16s 649ms/step - loss: 1.4460 - acc: 0.6275 - val_loss: 1.7809 - val_acc: 0.5391 Epoch 78/100 25/25 [==============================] - 16s 648ms/step - loss: 1.4209 - acc: 0.6310 - val_loss: 1.7754 - val_acc: 0.5484 Epoch 79/100 25/25 [==============================] - 16s 652ms/step - loss: 1.4056 - acc: 0.6289 - val_loss: 1.7807 - val_acc: 0.5451 Epoch 
80/100 25/25 [==============================] - 16s 648ms/step - loss: 1.3871 - acc: 0.6350 - val_loss: 1.7988 - val_acc: 0.5406 Epoch 81/100 25/25 [==============================] - 16s 650ms/step - loss: 1.3677 - acc: 0.6361 - val_loss: 1.7908 - val_acc: 0.5488 Epoch 82/100 25/25 [==============================] - 16s 651ms/step - loss: 1.3643 - acc: 0.6350 - val_loss: 1.8382 - val_acc: 0.5317 Epoch 83/100 25/25 [==============================] - 16s 643ms/step - loss: 1.3414 - acc: 0.6434 - val_loss: 1.7929 - val_acc: 0.5492 Epoch 84/100 25/25 [==============================] - 16s 646ms/step - loss: 1.3253 - acc: 0.6472 - val_loss: 1.8259 - val_acc: 0.5440 Epoch 85/100 25/25 [==============================] - 16s 654ms/step - loss: 1.3071 - acc: 0.6485 - val_loss: 1.8041 - val_acc: 0.5462 Epoch 86/100 25/25 [==============================] - 16s 644ms/step - loss: 1.2946 - acc: 0.6626 - val_loss: 1.8500 - val_acc: 0.5310 Epoch 87/100 25/25 [==============================] - 16s 644ms/step - loss: 1.2779 - acc: 0.6706 - val_loss: 1.8331 - val_acc: 0.5347 Epoch 88/100 25/25 [==============================] - 16s 640ms/step - loss: 1.2590 - acc: 0.6841 - val_loss: 1.8651 - val_acc: 0.5291 Epoch 89/100 25/25 [==============================] - 16s 639ms/step - loss: 1.2441 - acc: 0.6909 - val_loss: 1.8525 - val_acc: 0.5358 Epoch 90/100 25/25 [==============================] - 16s 645ms/step - loss: 1.2360 - acc: 0.6811 - val_loss: 1.8554 - val_acc: 0.5380 Epoch 91/100 25/25 [==============================] - 16s 634ms/step - loss: 1.2183 - acc: 0.6847 - val_loss: 1.8912 - val_acc: 0.5295 Epoch 92/100 25/25 [==============================] - 16s 640ms/step - loss: 1.2373 - acc: 0.6703 - val_loss: 1.8739 - val_acc: 0.5336 Epoch 93/100 25/25 [==============================] - 16s 643ms/step - loss: 1.1995 - acc: 0.6854 - val_loss: 1.9008 - val_acc: 0.5302 Epoch 94/100 25/25 [==============================] - 16s 636ms/step - loss: 1.1730 - acc: 0.6878 - val_loss: 1.8738 - val_acc: 0.5362 Epoch 95/100 25/25 [==============================] - 16s 640ms/step - loss: 1.1558 - acc: 0.6911 - val_loss: 1.9222 - val_acc: 0.5147 Epoch 96/100 25/25 [==============================] - 16s 638ms/step - loss: 1.1452 - acc: 0.6917 - val_loss: 1.9129 - val_acc: 0.5280 Epoch 97/100 25/25 [==============================] - 16s 640ms/step - loss: 1.1313 - acc: 0.6954 - val_loss: 1.8829 - val_acc: 0.5384 Epoch 98/100 25/25 [==============================] - 16s 641ms/step - loss: 1.1093 - acc: 0.6975 - val_loss: 1.9224 - val_acc: 0.5250 Epoch 99/100 25/25 [==============================] - 16s 640ms/step - loss: 1.0904 - acc: 0.7040 - val_loss: 1.9198 - val_acc: 0.5310 Epoch 100/100 25/25 [==============================] - 16s 641ms/step - loss: 1.0828 - acc: 0.7040 - val_loss: 1.9060 - val_acc: 0.5380
Apache-2.0
reuter_LSTM.ipynb
tecktonik08/test_deeplearning
Evaluation
# Evaluate on the data the model was trained on
model.evaluate(pad_x_train, y_train)
281/281 [==============================] - 16s 56ms/step - loss: 1.3192 - acc: 0.6544
Apache-2.0
reuter_LSTM.ipynb
tecktonik08/test_deeplearning
Preprocessing the x_test data
pad_x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=500)


def pad_make(x_data):
    pad_x = tf.keras.preprocessing.sequence.pad_sequences(x_data, maxlen=500)
    return pad_x


pad_make_x = pad_make(x_test)

model.evaluate(pad_make_x, y_test)
model.evaluate(pad_x_test, y_test)
71/71 [==============================] - 4s 57ms/step - loss: 1.9766 - acc: 0.5419
Apache-2.0
reuter_LSTM.ipynb
tecktonik08/test_deeplearning
Since the train and test accuracies are similar, we can see that the model has learned reasonably well (the sketch below compares the two explicitly).
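As a minimal sketch reusing the padded arrays already created above, the cell below evaluates the model on both the training and test sets and prints the accuracy gap explicitly instead of reading it off the curves. It relies on `model.evaluate` returning `[loss, accuracy]`, which holds here because the model was compiled with `metrics=['acc']`.

# Compare train vs. test accuracy explicitly
train_loss, train_acc = model.evaluate(pad_x_train, y_train, verbose=0)
test_loss, test_acc = model.evaluate(pad_x_test, y_test, verbose=0)
print(f"train acc: {train_acc:.4f}, test acc: {test_acc:.4f}, gap: {train_acc - test_acc:.4f}")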
import matplotlib.pyplot as plt

# Training vs. validation loss
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.show()

# Training vs. validation accuracy
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.show()

from sklearn.metrics import classification_report

# Per-class report on the training set
y_train_pred = model.predict(pad_x_train)
y_train_pred[0]
y_pred = np.argmax(y_train_pred, axis=1)
y_pred.shape
len(y_train)
print(classification_report(y_train, y_pred))

# Per-class report on the test set
y_test_pred = model.predict(pad_x_test)
y_pred = np.argmax(y_test_pred, axis=1)
print(classification_report(y_test, y_pred))
precision recall f1-score support 0 0.00 0.00 0.00 12 1 0.17 0.53 0.26 105 2 0.00 0.00 0.00 20 3 0.93 0.89 0.91 813 4 0.70 0.72 0.71 474 5 0.00 0.00 0.00 5 6 0.00 0.00 0.00 14 7 0.00 0.00 0.00 3 8 0.00 0.00 0.00 38 9 0.00 0.00 0.00 25 10 0.07 0.37 0.11 30 11 0.00 0.00 0.00 83 12 0.00 0.00 0.00 13 13 0.00 0.00 0.00 37 14 0.00 0.00 0.00 2 15 0.00 0.00 0.00 9 16 0.11 0.22 0.15 99 17 0.00 0.00 0.00 12 18 0.00 0.00 0.00 20 19 0.23 0.47 0.31 133 20 0.33 0.04 0.08 70 21 0.00 0.00 0.00 27 22 0.00 0.00 0.00 7 23 0.00 0.00 0.00 12 24 0.00 0.00 0.00 19 25 0.00 0.00 0.00 31 26 0.00 0.00 0.00 8 27 0.00 0.00 0.00 4 28 0.00 0.00 0.00 10 29 0.00 0.00 0.00 4 30 0.00 0.00 0.00 12 31 0.00 0.00 0.00 13 32 0.00 0.00 0.00 10 33 0.00 0.00 0.00 5 34 0.00 0.00 0.00 7 35 0.00 0.00 0.00 6 36 0.00 0.00 0.00 11 37 0.00 0.00 0.00 2 38 0.00 0.00 0.00 3 39 0.00 0.00 0.00 5 40 0.00 0.00 0.00 10 41 0.00 0.00 0.00 8 42 0.00 0.00 0.00 3 43 0.00 0.00 0.00 6 44 0.00 0.00 0.00 5 45 0.00 0.00 0.00 1 accuracy 0.54 2246 macro avg 0.06 0.07 0.05 2246 weighted avg 0.52 0.54 0.52 2246
Apache-2.0
reuter_LSTM.ipynb
tecktonik08/test_deeplearning
We can confirm that similar classes end up at 0 -> the zeros occur because of the words=10000 limit (the sketch below shows one way to inspect this cap).
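As a hedged sketch of how one might inspect that vocabulary cap, the cell below reloads the data with the same num_words=10000 limit and decodes one training sample; any word outside the top 10,000 shows up as the out-of-vocabulary marker. It assumes the data came from `tf.keras.datasets.reuters` (the loading code is not shown in this section), and uses the standard index offset of 3 for the reserved padding/start/unknown ids.

from tensorflow import keras

# Reload with the same vocabulary cap used for training (assumption: the Keras Reuters dataset)
(x_tr, y_tr), _ = keras.datasets.reuters.load_data(num_words=10000)

# Build an index -> word mapping; ids 0-2 are reserved (padding, start-of-sequence, unknown)
word_index = keras.datasets.reuters.get_word_index()
index_to_word = {index + 3: word for word, index in word_index.items()}

# Words outside the top 10,000 were replaced by the "unknown" id during loading and decode to "?"
decoded = " ".join(index_to_word.get(i, "?") for i in x_tr[0])
print(decoded[:300])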
_____no_output_____
Apache-2.0
reuter_LSTM.ipynb
tecktonik08/test_deeplearning
''' install 3dparty sheit ''' from IPython.display import clear_output as cle from pprint import pprint as print from PIL import Image import os import sys import json import IPython ''' default sample data delete ''' os.system('rm -r sample_data') ''' set root paths ''' root = '/content' gdrive_root = '/content/drive/My Drive' helpers_root = root + '/installed_repos/Python_Helpers' ''' setup install the Helpers module ''' os.system('git clone https://github.com/bxck75/Python_Helpers.git ' + helpers_root) os.system('python ' + helpers_root + 'setup.py install') ''' import helpers ''' os.chdir(helpers_root) import main as main_core MainCore = main_core.main() HelpCore = MainCore.Helpers_Core FScrape = HelpCore.flickr_scrape fromGdrive = HelpCore.GdriveD toGdrive = HelpCore.ZipUp.ZipUp cle() dir(HelpCore) FScrape(['Ork','Troll','Dragon'], 25, '/content/images') imgs_path_list = HelpCore.GlobX(img_dir,'*.*g') print(imgs_path_list) dir(HelpCore) print(MainCore.Helpers_Core.cloner('/content/images')) # def LandMarks(img_dir,out_dir): # ''' Folder glob and Landmark all found imgs ''' # imgs_path_list = HelpCore.GlobX(img_dir,'*.*g') # imgs_path_list.sort() # i=0 # for i in range(len(imgs_path_list)): # ''' make folders ''' # img_pathAr = imgs_path_list[i] # # img_pathAr.Split(Path.DirectorySeparatorChar) # returns array of the path # # img_pathAr.Lenth - 2 # # print(img_pathAr[2]) # os.system('mkdir -p '+os.path.join(out_dir,'/org')) # os.system('mkdir -p '+os.path.join(out_dir,'/marked')) # ''' backup original ''' # os.system('cp imgs_path_list[i] '+out_dir + '/org') # ''' loop over images ''' # img = cv.imread(imgs_path_list[i]) # for c, w, h in img.shape: # print(3, w, h) # if img is None: # print('Failed to load image file:', fname) # sys.exit(1) # fork_img = img # ''' Mark the image ''' # gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY) # lsd = cv.line_descriptor_LSDDetector.createLSDDetector() # lines = lsd.detect(gray, 1, 1) # for kl in lines: # if kl.octave == 0: # pt1 = (int(kl.startPointX), int(kl.startPointY)) # pt2 = (int(kl.endPointX), int(kl.endPointY)) # cv.line(fork_img, pt1, pt2, [255, 0, 0], 2) # cv.waitKey(0) # cv.destroyAllWindows() # cv.imwrite('nice.jpg',img) # # marked # cv.imwrite('nice.jpg',img) # i += 1 # def org_n_marked_clone(img_path,id=0): # ''' backup the originals ''' # drive, path_and_file = os.path.splitdrive(img_path) # path, file = os.path.split(path_and_file) # fi, ex = file.split(.) 
# fi.rstrip(string.digits) # ''' compose the new paths ''' # org_path = path + '/org/' + fi + '_%d' % id # marked_path = path + '/marked/' + fi + '_%d' % id # ''' return the list ''' # return [org_path,marked_path] # org_n_marked_clone('/content/images/img_1.jpg') os.path.join( path+'/org', file ) import sys import cv2 as cv if __name__ == '__main__': print(__doc__) fname = '/content/images/img_1.jpg' img = cv.imread(fname) if img is None: print('Failed to load image file:', fname) sys.exit(1) gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY) lsd = cv.line_descriptor_LSDDetector.createLSDDetector() lines = lsd.detect(gray, 1, 1) for kl in lines: if kl.octave == 0: # cv.line only accepts integer coordinate pt1 = (int(kl.startPointX), int(kl.startPointY)) pt2 = (int(kl.endPointX), int(kl.endPointY)) cv.line(img, pt1, pt2, [255, 0, 0], 2) # plt.imshow('output', img) cv.waitKey(0) cv.destroyAllWindows() cv.imwrite('nice.jpg',img) from google.colab import drive drive.mount('/content/drive') help(HelpCore) help(FScrape) # search_list,img_dir,qty = ['zombie'], 'images', 21 # FScrape(search_list,qty,img_dir) funcs=[ 'BigHelp', 'Colab_root', 'ColorPrint', 'FileView', 'FlickrS', 'GdriveD', 'Gdrive_root', 'GlobX', 'GooScrape', 'ImgCrawler', 'ImgTools', 'LogGER', 'Logger', 'MethHelp', 'Ops', 'Repo_List', 'Resize', 'Sys_Cmd', 'Sys_Exec', 'ZipUp', ] def img_show_folder(folder): # fname = '/content/images/img_1.jpg' folder_path = Path(folder) GlobX(folder_path,'*.*g') for base, dirs, files in os.walk('/content/images'): files.sort() for i in range(len(files)): print(base+'/'+files[i]) img = cv.imread(base+'/'+files[i]) plt.imshow(img,cmap=dark2) plt.show() HelpCore.GlobX('/content/images','*.*g') # search_list, img_dir, qty = ['zombie'], 'images', 200 # FScrape(search_list, qty, img_dir) # toGdrive('cv2_samples','/content/drive/My Drive','/content/installed_repos/opencv/samples') # Load zipper # Zipper = toGdrive.GdriveD # Zip folder # images_set_name, gdrive_folder, folder_to_zip = 'cv2_samples', '/content/drive/My Drive', '/content/installed_repos/opencv/samples' # result=Zipper(images_set_name,gdrive_folder,folder_to_zip).ZipUp # Print Resulting hash print(result) dir(toGdrive) # HelpCore.GlobX('/content', '*.py') !python /content/installed_repos/opencv/samples/dnn/segmentation.py --zoo --input --framework 'tensorflow' %cd /content/installed_repos/opencv/samples/dnn !cat segmentation.py
_____no_output_____
BSD-2-Clause
latestv1.ipynb
bxck75/Python_Helpers
parser = argparse.ArgumentParser(add_help=False)parser.add_argument('--zoo', default=os.path.join(os.path.dirname(os.path.abspath(__file__)), 'models.yml'), help='An optional path to file with preprocessing parameters.')parser.add_argument('--input', help='Path to input image or video file. Skip this argument to capture frames from a camera.')parser.add_argument('--framework', choices=['caffe', 'tensorflow', 'torch', 'darknet'], help='Optional name of an origin framework of the model. ' 'Detect it automatically if it does not set.')parser.add_argument('--colors', help='Optional path to a text file with colors for an every class. ' 'An every color is represented with three values from 0 to 255 in BGR channels order.')parser.add_argument('--backend', choices=backends, default=cv.dnn.DNN_BACKEND_DEFAULT, type=int, help="Choose one of computation backends: " "%d: automatically (by default), " "%d: Halide language (http://halide-lang.org/), " "%d: Intel's Deep Learning Inference Engine (https://software.intel.com/openvino-toolkit), " "%d: OpenCV implementation" % backends)parser.add_argument('--target', choices=targets, default=cv.dnn.DNN_TARGET_CPU, type=int, help='Choose one of target computation devices: ' '%d: CPU target (by default), ' '%d: OpenCL, ' '%d: OpenCL fp16 (half-float precision), ' '%d: VPU' % targets)args, _ = parser.parse_known_args()
# !wget https://drive.google.com/open?id=1KNfN-ktxbPJMtmdiL-I1WW0IO1B_2EG2 # landmarks_file=['1KNfN-ktxbPJMtmdiL-I1WW0IO1B_2EG2','/content/shape_predictor_68_face_landmarks.dat'] # fromGdrive.GdriveD(landmarks_file[0],landmarks_file[1]) import cv2 import numpy import dlib import matplotlib.pyplot as plt # cap = cv2.VideoCapture(0) detector = dlib.get_frontal_face_detector() predictor = dlib.shape_predictor("/content/shape_predictor_68_face_landmarks.dat") while True: _, frame = cap.read() gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) faces = detector(gray) for face in faces: x1 = face.left() y1 = face.top() x2 = face.right() y2 = face.bottom() #cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 3) landmarks = predictor(gray, face) for n in range(0, 68): x = landmarks.part(n).x y = landmarks.part(n).y cv2.circle(frame, (x, y), 4, (255, 0, 0), -1) plt.imshow("Frame", frame) key = cv2.waitKey(1) if key == 27: break cap.release() cv2.destroyAllWindows() ''' install 3dparty sheit ''' from IPython.display import clear_output as cle from pprint import pprint as print from PIL import Image import os import sys import json import IPython ''' default sample data delete ''' os.system('rm -r sample_data') ''' set root paths ''' root = '/content' gdrive_root = '/content/drive/My Drive' helpers_root = root + '/installed_repos/Python_Helpers' ''' setup install the Helpers module ''' os.system('git clone https://github.com/bxck75/Python_Helpers.git ' + helpers_root) os.system('python ' + helpers_root + 'setup.py install') os.chdir(helpers_root) from main import main landmarks_68_file = '1KNfN-ktxbPJMtmdiL-I1WW0IO1B_2EG2' landmarks_194_file = '1fMOT_0f5clPbZXsphZyrGcLXkIhSDl3o' os.chdir(root) images_set_name, gdrive_folder, folder_to_zip = 'cv2_samples', '/content/installed_repos/opencv/samples/dnn/*', '/content/drive/My Drive' results=HelpCore.ZipUp.ZipUp(images_set_name,gdrive_folder,folder_to_zip).ZipUp print(results) !zip -r cv2_examples.zip /content/installed_repos/opencv/samples/dnn /content/installed_repos/opencv/samples/python
_____no_output_____
BSD-2-Clause
latestv1.ipynb
bxck75/Python_Helpers
This block produces a rendering that uses the full width of the window.
%%HTML <style>.container{width:100%;}</style>
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
This block enables style checking. To improve readability, we ignore the convention that two blank lines should separate definitions. We also allow multiple spaces before an operator so that sequential assignments can be aligned at the `=`.
%load_ext pycodestyle_magic %flake8_on --ignore E302,E305,E221
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
We also need *Graphviz* for visualization.
import graphviz
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Splay Trees This notebook presents a particular kind of self-balancing tree, the *Splay Tree*. This data structure was [introduced by Sleator and Tarjan in 1985](http://www.cs.cmu.edu/~sleator/papers/self-adjusting.pdf "D. D. Sleator, R. E. Tarjan (1985): Self-Adjusting Binary Search Trees. Journal of the ACM, 32(3) 652-686"). Unlike other self-balancing trees such as AVL trees, splay trees are not required to be as well balanced as possible at all times. Instead, the tree is optimized so that frequently used elements are close to the root. Based on these trees, we will later build an alternative implementation of sets for the Python programming language. In the reference implementation *CPython*, [sets are implemented on top of *hash tables*](https://github.com/python/cpython/blob/3.7/Objects/setobject.c "R. D. Hettinger et al. (2019): cpython/Objects/setobject.c, GitHub"). Many other widespread implementations of other programming languages also use hash tables for sets, or at least offer this as an option, for example [Java](https://docs.oracle.com/en/java/javase/13/docs/api/java.base/java/util/Set.html "Oracle Corporation (2019): Set (Java SE 13 & JDK 13)"), [.NET (C#)](https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.iset-1?view=netframework-4.8 "Microsoft Corporation (2020): ISet Interface (System.Collections.Generic), Microsoft Docs"), [JavaScript](https://v8.dev/blog/hash-code "S. Gunasekaran (2018): Optimizing hash tables: hiding the hash code, V8 Blog") or [PHP](https://www.php.net/manual/en/class.ds-set.php "The PHP Group (2020): PHP: Set, Manual"). However, using trees for sets makes some set-heavy programming problems easier to solve, because such sets are *ordered* and therefore, for example, have a minimum and a maximum that are easy to determine. Ordered binary trees are well suited for implementing sets, in particular because neither contains duplicate elements. For ordered sets we must additionally require that all elements can be ordered. We discuss later how arbitrary payloads can be given an order in Python. The pure data structure (without operations) is defined for splay trees exactly as for regular ordered binary trees: $\mathrm{Node}(p, l, r)$ is a tree where
- $p$ is a payload,
- $l$ is the left subtree, and
- $r$ is the right subtree.
class Node: def __init__(self, payload, left, right): self.payload = payload self.left = left self.right = right
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
We require:
- For all payloads in the left subtree $l$: they are smaller than the payload $p$.
- For all payloads in the right subtree $r$: they are greater than the payload $p$.

These statements can also be written as $l < p < r$. Visualization To explain splay trees more easily, we first implement their visualization. Trees, being a subset of graphs, can be visualized with the [Python interface](https://github.com/xflr6/graphviz "S. Bank (2019): graphviz, GitHub") to [*Graphviz*](https://graphviz.org/ "J. Ellson et al. (2019): Graphviz"). To this end we first define the internal method `_graph` of the class `Node`, which adds its own structure to an existing graph `dot`. We have to keep track of which names we have already used for nodes in the graph, for which `_graph` additionally receives the set `used`. As a name we always use the `id` of the node, except when we mark empty nodes, for which we hold no `Node` object. In that case we use a counter `key`, which `_graph` also receives. `_graph` returns the modified counter; we do not need to return `used`, since sets are passed by reference.
def _graph(self, dot, used, key): used.add(id(self)) dot.node(str(id(self)), label=str(self.payload)) if not (self.left is None and self.right is None): for node in self.left, self.right: if node is not None: dot.edge(str(id(self)), str(id(node))) key = node._graph(dot, used, key) else: while True: key += 1 if key not in used: break used.add(key) dot.node(str(key), shape="point") dot.edge(str(id(self)), str(key)) return key Node._graph = _graph del _graph
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
The publicly exposed method `graph` (without underscore) of the class `Node` only does the preparatory work for `_graph` and then calls that method. However, `graph` supports arbitrarily many `additionals`, i.e. `Node`s that are also inserted into the graph in this order. This lets us see several trees in one diagram and, in particular, follow individual steps more easily.
def graph(self, *additionals): dot = graphviz.Digraph() used = set() key = self._graph(dot, used, 0) for el in additionals: key = el._graph(dot, used, key) return dot Node.graph = graph del graph
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
To be able to annotate what happens in such steps, we define the class `Method`, which can also be visualized via `_graph`. Such nodes are then shown as rectangles.
class Method: def __init__(self, name): self.name = name def _graph(self, dot, used, key): used.add(id(self)) dot.node(str(id(self)), label=str(self.name) + " ⇒", shape="rectangle", style="dotted") return key
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Finally we define `TempTree`, whose connection to its single child (sitting on the left or right) is attached at the side rather than at the bottom and is drawn dashed. The tree itself is drawn as a triangle. We will later use it to visualize trees for which we only draw the outermost nodes and omit the nodes in between.
class TempTree: def __init__(self, name, child, left): self.name = name self.child = child self.left = left def _graph(self, dot, used, key): used.add(id(self)) dot.node(str(id(self)), label=str(self.name), shape="triangle") if self.child is not None: dot.edge(str(id(self)), str(id(self.child)), style="dashed", tailport="w" if self.left else "e") key = self.child._graph(dot, used, key) else: while True: key += 1 if key not in used: break used.add(key) dot.node(str(key), shape="point") dot.edge(str(id(self)), str(key), style="dashed", tailport="w" if self.left else "e") return key TempTree.graph = Node.graph
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Splaying The distinguishing feature of splay trees is that together with every tree operation that locates an element in the tree, a special operation, the *splay*, is performed. By tree operations that locate an element we mean all operations on the tree that search it, either for an element or for the correct place for an element. These include inserting, deleting and finding elements. The splay is a function that modifies a tree so that a payload already contained in the tree becomes the new root of the tree.$$\mathrm{Node}.\mathrm{splay}: \mathrm{Payload} \to \mathrm{Node}$$If the given payload is in the tree, then precisely the node containing it becomes the new root. We use the *top-down* approach, in which, starting from the root, we set nodes aside until the node to be splayed is the root, and then reattach the nodes that were set aside. This approach was already described in the original publication by Sleator and Tarjan (pp. 667ff.), but was only [analyzed by Mäkinen in 1987 as being somewhat more limited in complexity](https://link.springer.com/article/10.1007%2FBF01933728 "E. Mäkinen (1987): On top-down splaying. BIT Numerical Mathematics, 27 330-339 (SpringerLink)"). In top-down splaying, two trees $L, R$ are considered in addition to the tree being processed. When nodes are set aside, they are stored in $L$ and $R$. The elements smaller than the node to be splayed go into $L$, those greater go into $R$. The rightmost node of $L$ and the leftmost node of $R$ always remain free, so that new nodes can easily be attached there. For this we first define $\mathrm{insert\_left}$ and $\mathrm{insert\_right}$, which attach not new payloads but existing nodes at the outermost left (for $R$) or the outermost right (for $L$) of the tree. The tree itself is the first parameter, the new node the second. The formal definition is recursive (a small illustrative sketch of these two helpers follows below).$$\mathrm{insert\_left}: \mathrm{Node} \times \mathrm{Node} \to \mathrm{Node}$$$$\begin{aligned}\mathrm{Nil}.\mathrm{insert\_left}(\mathrm{Node}(a, b, c)) &= \mathrm{Node}(a, b, c) \\\mathrm{Node}(x, y, z).\mathrm{insert\_left}(\mathrm{Nil}) &= \mathrm{Node}(x, y, z) \\\mathrm{Node}(x, y, z).\mathrm{insert\_left}(\mathrm{Node}(a, b, c)) &= \mathrm{Node}(x, y.\mathrm{insert\_left}(\mathrm{Node}(a, b, c)), z)\end{aligned}$$$$\mathrm{insert\_right}: \mathrm{Node} \times \mathrm{Node} \to \mathrm{Node}$$$$\begin{aligned}\mathrm{Nil}.\mathrm{insert\_right}(\mathrm{Node}(a, b, c)) &= \mathrm{Node}(a, b, c) \\\mathrm{Node}(x, y, z).\mathrm{insert\_right}(\mathrm{Nil}) &= \mathrm{Node}(x, y, z) \\\mathrm{Node}(x, y, z).\mathrm{insert\_right}(\mathrm{Node}(a, b, c)) &= \mathrm{Node}(x, y, z.\mathrm{insert\_right}(\mathrm{Node}(a, b, c)))\end{aligned}$$To be able to define splaying in the context of these temporary trees, we define the function $\mathrm{splay\_step}$, which performs one splay step for a payload, operating on a triple of $L$, the tree itself, and $R$, and returning this triple, in general modified.$$\langle\mathrm{Node}, \mathrm{Node}, \mathrm{Node}\rangle.\mathrm{splay\_step}: \mathrm{Payload} \to \langle\mathrm{Node}, \mathrm{Node}, \mathrm{Node}\rangle$$The $\mathrm{splay}$ function only has to construct empty trees $L, R$ and start the first $\mathrm{splay\_step}$. The remaining steps are invoked recursively from there.
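The notebook defines $\mathrm{insert\_left}$ and $\mathrm{insert\_right}$ only formally. As a small aid, here is a minimal recursive Python sketch of the two helpers, treating $\mathrm{Nil}$ as `None` (illustrative only; these free functions are not part of the notebook, whose implementation below instead rewrites references on existing nodes to avoid constructing new objects):

```python
def insert_left(tree, node):
    """Attach the existing node `node` at the leftmost free position of `tree`."""
    if tree is None:
        return node
    if node is None:
        return tree
    tree.left = insert_left(tree.left, node)
    return tree


def insert_right(tree, node):
    """Attach the existing node `node` at the rightmost free position of `tree`."""
    if tree is None:
        return node
    if node is None:
        return tree
    tree.right = insert_right(tree.right, node)
    return tree
```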
Once all $\mathrm{splay\_step}$s are done, it returns the middle tree.$$\langle\mathrm{Nil}, \mathrm{Node}(x, y, z), \mathrm{Nil}\rangle.\mathrm{splay\_step}(k) = \langle\mathrm{Nil}, \mathrm{Node}(a, b, c), \mathrm{Nil}\rangle \Rightarrow \mathrm{Node}(x, y, z).\mathrm{splay}(k) = \mathrm{Node}(a, b, c)$$In each step two nodes are set aside. If the node with the payload we are looking for is on the second-highest level, then of course only one node is set aside. We treat this step first. If the node is the left child of its parent, we call this step a *Zig*. Zig and Zag The following figure illustrates the step. The triangles $L, R$ here stand for entire trees, and the outgoing arrows stand for the rightmost and leftmost position in these trees, respectively.
# flake8: noqa # blocks solely for drawing are exempt because I consider # semicolons and longer lines more beneficial to the reading flow there x1 = Node("x", None, None); y1 = Node("y", None, None); z1 = Node("z", None, None) a1 = Node("a", x1, y1); b1 = Node("b", a1, z1) l1 = TempTree("L", None, False); r1 = TempTree("R", None, True) x2 = Node("x", None, None); y2 = Node("y", None, None); z2 = Node("z", None, None) a2 = Node("a", x2, y2); b2 = Node("b", None, z2) l2 = TempTree("L", None, False); r2 = TempTree("R", b2, True) dot = l1.graph(b1, r1, Method("zig()"), l2, a2, r2) for subtree in x1, y1, z1, x2, y2, z2: dot.node(str(id(subtree)), shape="triangle") dot
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
We move the node $a$ to the root, while $b$ together with its right subtree $z$ is attached at the bottom left of $R$. The ordering is preserved, since $b > a$ and since all elements that were already in $R$ are also greater than $b$. Formally we define$$\langle L, \mathrm{Node}(b, \mathrm{Node}(a, x, y), z), R\rangle.\mathrm{splay\_step}(a) = \langle L, \mathrm{Node}(a, x, y), R.\mathrm{insert\_left}(\mathrm{Node}(b, \mathrm{Nil}, z))\rangle.\mathrm{splay\_step}(a)$$Note that we apply another splay step to the result. Although the payload we are looking for is now already at the root, we want to define what has to be done after the last regular step only once. Moreover, for the other options of performing splay steps this is generally not the case; there the node will not yet be at the root. In the implementation we will later always check whether another step is needed and then execute the right one. We handle the implementation in Python a little differently. The method `_zig`, marked as internal, receives pointers to the current extrema in $L$ and $R$, fixes the object itself into $R$, and then overwrites the reference to itself with the left node beneath it. This way we avoid the expensive construction of new objects. The new extrema as well as the new pointer to the node under consideration are returned (first $L$, then the main tree, then $R$).
def _zig(self, max_less, min_greater): # self = Node(b, Node(a, x, y), z) min_greater.left = self new_min_greater = self # new_min_greater = Node(b, Node(a, x, y), z) new_self = self.left # new_self = Node(a, x, y) new_min_greater.left = None # new_min_greater = Node(b, Nil, z) return max_less, new_self, new_min_greater Node._zig = _zig del _zig
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
In a *Zag*, the node to be splayed is the right child of the root:
# flake8: noqa x1 = Node("x", None, None); y1 = Node("y", None, None); z1 = Node("z", None, None) a1 = Node("a", y1, z1); b1 = Node("b", x1, a1) l1 = TempTree("L", None, False); r1 = TempTree("R", None, True) x2 = Node("x", None, None); y2 = Node("y", None, None); z2 = Node("z", None, None) a2 = Node("a", y2, z2); b2 = Node("b", x2, None) l2 = TempTree("L", b2, False); r2 = TempTree("R", None, True) dot = l1.graph(b1, r1, Method("zag()"), l2, a2, r2) for subtree in x1, y1, z1, x2, y2, z2: dot.node(str(id(subtree)), shape="triangle") dot
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
The formal definition and the implementation are similar.$$\langle L, \mathrm{Node}(b, x, \mathrm{Node}(a, y, z)), R\rangle.\mathrm{splay\_step}(a) = \langle L.\mathrm{insert\_right}(\mathrm{Node}(b, x, \mathrm{Nil})), \mathrm{Node}(a, y, z), R\rangle.\mathrm{splay\_step}(a)$$
def _zag(self, max_less, min_greater): # self = Node(b, x, Node(a, y, z)) max_less.right = self new_max_less = self # new_min_greater = Node(b, x, Node(a, y, z)) new_self = self.right # new_self = Node(a, y, z) new_max_less.right = None # new_min_greater = Node(b, x, Nil) return new_max_less, new_self, min_greater Node._zag = _zag del _zag
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Zig-Zig and Zag-Zag Next we treat the case that the node is at least two levels away from the root and both the node and its parent are left children. We call the operation to apply to this starting situation a *Zig-Zig*. This operation looks like this:
# flake8: noqa w1 = Node("w", None, None); x1 = Node("x", None, None); y1 = Node("y", None, None); z1 = Node("z", None, None) a1 = Node("a", w1, x1); b1 = Node("b", a1, y1); c1 = Node("c", b1, z1) l1 = TempTree("L", None, False); r1 = TempTree("R", None, True) w2 = Node("w", None, None); x2 = Node("x", None, None); y2 = Node("y", None, None); z2 = Node("z", None, None) a2 = Node("a", w2, x2); c2 = Node("c", y2, z2); b2 = Node("b", None, c2) l2 = TempTree("L", None, False); r2 = TempTree("R", b2, True) dot = l1.graph(c1, r1, Method("zig_zig()"), l2, a2, r2) for subtree in w1, x1, y1, z1, w2, x2, y2, z2: dot.node(str(id(subtree)), shape="triangle") dot
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
We move the node $a$ to the root. $b$ is attached to $R$ with $c$ as its right child, where $c$ has $y$ as its left subtree. The ordering condition is preserved, since $c > b > a$ and $c > y > b$. We define$$k < b \Rightarrow \langle L, \mathrm{Node}(c, \mathrm{Node}(b, \mathrm{Node}(a, w, x), y), z), R\rangle.\mathrm{splay\_step}(k) =$$$$= \langle L, \mathrm{Node}(a, w, x), R.\mathrm{insert\_left}(\mathrm{Node}(b, \mathrm{Nil}, \mathrm{Node}(c, y, z)))\rangle.\mathrm{splay\_step}(k)$$In the implementation we again only rewrite the necessary references. These are simply a few additional steps compared to `_zig` and `_zag`.
def _zig_zig(self, max_less, min_greater): # self = Node(c, Node(b, Node(a, w, x), y), z) min_greater.left = self.left new_min_greater = self.left # new min_greater = Node(b, Node(a, w, x), y) new_self = new_min_greater.left # new_self = Node(a, w, x) self.left = new_min_greater.right # self = Node(c, y, z) new_min_greater.left = None # new_min_greater = Node(b, Nil, y) new_min_greater.right = self # new_min_greater = Node(b, Nil, Node(c, y, z)) return max_less, new_self, new_min_greater Node._zig_zig = _zig_zig del _zig_zig
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
We call the same situation with a right child and a right grandchild a *Zag-Zag*:
# flake8: noqa w1 = Node("w", None, None); x1 = Node("x", None, None); y1 = Node("y", None, None); z1 = Node("z", None, None) a1 = Node("a", y1, z1); b1 = Node("b", x1, a1); c1 = Node("c", w1, b1) l1 = TempTree("L", None, False); r1 = TempTree("R", None, True) w2 = Node("w", None, None); x2 = Node("x", None, None); y2 = Node("y", None, None); z2 = Node("z", None, None) a2 = Node("a", y2, z2); c2 = Node("c", w2, x2); b2 = Node("b", c2, None) l2 = TempTree("L", b2, False); r2 = TempTree("R", None, True) dot = l1.graph(c1, r1, Method("zag_zag()"), l2, a2, r2) for subtree in w1, x1, y1, z1, w2, x2, y2, z2: dot.node(str(id(subtree)), shape="triangle") dot
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
The splay step is defined and implemented similarly.$$k > b \Rightarrow \langle L, \mathrm{Node}(c, w, \mathrm{Node}(b, x, \mathrm{Node}(a, y, z))), R\rangle.\mathrm{splay\_step}(k) = $$$$= \langle L.\mathrm{insert\_right}(\mathrm{Node}(b, \mathrm{Node}(c, w, x), \mathrm{Nil})), \mathrm{Node}(a, y, z), R\rangle.\mathrm{splay\_step}(k)$$
def _zag_zag(self, max_less, min_greater): # self = Node(c, w, Node(b, x, Node(a, y, z))) max_less.right = self.right new_max_less = self.right # new_max_less = Node(b, x, Node(a, y, z)) new_self = new_max_less.right # new_self = Node(a, y, z) self.right = new_max_less.left # self = Node(c, w, x) new_max_less.right = None # new_max_less = Node(b, x, Nil) new_max_less.left = self # new_max_less = Node(b, Node(c, w, x), Nil) return new_max_less, new_self, min_greater Node._zag_zag = _zag_zag del _zag_zag
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Zig-Zag and Zag-Zig Finally we treat the case that the parent is a left child but the node itself is a right child. We call the operation for this situation a *Zig-Zag*. For this operation we need both $L$ and $R$ for the first time:
# flake8: noqa w1 = Node("w", None, None); x1 = Node("x", None, None) y1 = Node("y", None, None); z1 = Node("z", None, None) a1 = Node("a", x1, y1); b1 = Node("b", w1, a1); c1 = Node("c", b1, z1) l1 = TempTree("L", None, False); r1 = TempTree("R", None, True) w2 = Node("w", None, None); x2 = Node("x", None, None) y2 = Node("y", None, None); z2 = Node("z", None, None) a2 = Node("a", x2, y2); b2 = Node("b", w2, None); c2 = Node("c", None, z2) l2 = TempTree("L", b2, False); r2 = TempTree("R", c2, True) dot = l1.graph(c1, r1, Method("zig_zag()"), l2, a2, r2) for subtree in w1, x1, y1, z1, w2, x2, y2, z2: dot.node(str(id(subtree)), shape="triangle") dot
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
We move the node $a$ to the position of $c$; in doing so, $b$ and $c$ are attached to $L$ and $R$ respectively, since they are smaller and greater than $a$, respectively. We define$$b < k < c \Rightarrow \langle L, \mathrm{Node}(c, \mathrm{Node}(b, w, \mathrm{Node}(a, x, y)), z), R\rangle.\mathrm{splay\_step}(k) =$$$$= \langle L.\mathrm{insert\_right}(\mathrm{Node}(b, w, \mathrm{Nil})), \mathrm{Node}(a, x, y), R.\mathrm{insert\_left}(\mathrm{Node}(c, \mathrm{Nil}, z))\rangle.\mathrm{splay\_step}(k)$$In code we have
def _zig_zag(self, max_less, min_greater): # self = Node(c, Node(b, w, Node(a, x, y)), z) max_less.right = self.left new_max_less = self.left # new_max_less = Node(b, w, Node(a, x, y)) min_greater.left = self new_min_greater = self # new_min_greater = Node(c, Node(b, w, Node(a, x, y)), z) new_self = self.left.right # new_self = Node(a, x, y) new_max_less.right = None # new_max_less = Node(b, w, Nil) new_min_greater.left = None # new_min_greater = Node(c, Nil, z) return new_max_less, new_self, new_min_greater Node._zig_zag = _zig_zag del _zig_zag
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
In the reverse configuration, i.e. the parent is the right child and the node is the left child, we have the operation *Zag-Zig* with
# flake8: noqa w1 = Node("w", None, None); x1 = Node("x", None, None) y1 = Node("y", None, None); z1 = Node("z", None, None) a1 = Node("a", x1, y1); b1 = Node("b", a1, z1); c1 = Node("c", w1, b1) l1 = TempTree("L", None, False); r1 = TempTree("R", None, True) w2 = Node("w", None, None); x2 = Node("x", None, None) y2 = Node("y", None, None); z2 = Node("z", None, None) a2 = Node("a", x2, y2); b2 = Node("b", None, z2); c2 = Node("c", w2, None) l2 = TempTree("L", c2, False); r2 = TempTree("R", b2, True) dot = l1.graph(c1, r1, Method("zag_zig()"), l2, a2, r2) for subtree in w1, x1, y1, z1, w2, x2, y2, z2: dot.node(str(id(subtree)), shape="triangle") dot
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Formally we write$$c < k < b \Rightarrow \langle L, \mathrm{Node}(c, w, \mathrm{Node}(b, \mathrm{Node}(a, x, y), z)), R\rangle.\mathrm{splay\_step}(k) =$$$$= \langle L.\mathrm{insert\_right}(\mathrm{Node}(c, w, \mathrm{Nil})), \mathrm{Node}(a, x, y), R.\mathrm{insert\_left}(\mathrm{Node}(b, \mathrm{Nil}, z))\rangle.\mathrm{splay\_step}(k)$$and in code
def _zag_zig(self, max_less, min_greater): # self = Node(c, w, Node(b, Node(a, x, y), z)) max_less.right = self new_max_less = self # new_max_less = Node(c, w, Node(b, Node(a, x, y), z)) min_greater.left = self.right new_min_greater = self.right # new_min_greater = Node(b, Node(a, x, y), z) new_self = self.right.left # new_self = Node(a, x, y) new_max_less.right = None # new_max_less = Node(c, w, Nil) new_min_greater.left = None # new_min_greater = Node(b, Nil, z) return new_max_less, new_self, new_min_greater Node._zag_zig = _zag_zig del _zag_zig
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Comparing arbitrary payloads In order to later support sets containing elements of different types, we must be able to compare arbitrary payloads. To do so we attempt a direct comparison and, if that does not work, compare by class name. This way, floating-point and integer numbers of equal value end up in the set only once (this is also the case for Python sets). We attach the methods we write for this to the class `Node`, so as to load as little as possible into the namespace and thereby increase reusability. `_arb_gt(x, y)` checks whether `x` $>$ `y` in the sense of this comparison.
def _arb_gt(self, x, y): try: return y < x except TypeError: return type(x).__name__ > type(y).__name__ Node._arb_gt = _arb_gt del _arb_gt
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
`_arb_lt(x, y)` checks `x` $<$ `y`…
def _arb_lt(self, x, y): try: return x < y except TypeError: return type(x).__name__ < type(y).__name__ Node._arb_lt = _arb_lt del _arb_lt
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
…and `_arb_eq(x, y)` checks `x == y`.
def _arb_eq(self, x, y): try: return x == y except TypeError: return False Node._arb_eq = _arb_eq del _arb_eq
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Chaining the steps The formal definition still lacks the base case in which the root contains the payload to be splayed. We then attach $L$ and $R$ to the new root; after all, the nodes in $L$ and $R$ were just smaller and greater than the root, respectively. In doing so we insert the subtrees the root still has into $L$ and $R$; to the left of the root are all payloads that are greater than those in $L$, and to the right are all that are smaller than those in $R$.
# flake8: noqa x1 = Node("x", None, None); y1 = Node("y", None, None); a1 = Node("a", x1, y1) l1 = TempTree("L", None, False); r1 = TempTree("R", None, True) x2 = Node("x", None, None); y2 = Node("y", None, None) l2 = TempTree("L", x2, False); r2 = TempTree("R", y2, True) a2 = Node("a", l2, r2) dot = l1.graph(a1, r1, Method("finish()"), a2) for subtree in x1, y1, x2, y2: dot.node(str(id(subtree)), shape="triangle") dot
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
We define:$$\langle L, \mathrm{Node}(a, x, y), R\rangle.\mathrm{splay\_step}(a) = \langle\mathrm{Nil}, \mathrm{Node}(a, L.\mathrm{insert\_right}(x), R.\mathrm{insert\_left}(y)), \mathrm{Nil}\rangle$$However, we must also consider the base case in which we find that the node is not present, because there are no nodes left where it would have to be. In this case we only have to attach the other subtree.$$\begin{aligned}k < a &\Rightarrow \langle L, \mathrm{Node}(a, \mathrm{Nil}, y), R\rangle.\mathrm{splay\_step}(k) = \langle\mathrm{Nil}, \mathrm{Node}(a, L, R.\mathrm{insert\_left}(y)), \mathrm{Nil}\rangle \\k > a &\Rightarrow \langle L, \mathrm{Node}(a, x, \mathrm{Nil}), R\rangle.\mathrm{splay\_step}(k) = \langle\mathrm{Nil}, \mathrm{Node}(a, L.\mathrm{insert\_right}(x), R), \mathrm{Nil}\rangle\end{aligned}$$In the implementation we keep selecting the appropriate step and carrying it out until we arrive at the node we are looking for or unsuccessfully hit a leaf; done iteratively, this avoids function calls and the limits on recursion depth (a rough iterative sketch is shown after the recursive implementation below). If the node we are looking for is exactly one level away, we perform a Zig or a Zag. In this case we distinguish:

|Left child|Right child|
|:----------|:-----------|
|Zig |Zag |

If it is still at least two levels away, we distinguish:

|Child vs. grandchild |Left child|Right child|
|----------------:|:----------|:-----------|
|**Left grandchild** |Zig-Zig |Zag-Zig |
|**Right grandchild**|Zig-Zag |Zag-Zag |

To be able to compare this implementation more easily with other recursively implemented data structures later on, we implement `_splay_step` recursively here as well:
def _splay_step(self, max_less, min_greater, payload): if self._arb_lt(payload, self.payload): if self.left is None: return max_less, self, min_greater if self._arb_lt(payload, self.left.payload) \ and self.left.left is not None: new_max_less, new_self, new_min_greater = \ self._zig_zig(max_less, min_greater) return new_self._splay_step(new_max_less, new_min_greater, payload) if self._arb_gt(payload, self.left.payload) \ and self.left.right is not None: new_max_less, new_self, new_min_greater = \ self._zig_zag(max_less, min_greater) return new_self._splay_step(new_max_less, new_min_greater, payload) return self._zig(max_less, min_greater) if self._arb_gt(payload, self.payload): if self.right is None: return max_less, self, min_greater if self._arb_gt(payload, self.right.payload) \ and self.right.right is not None: new_max_less, new_self, new_min_greater = \ self._zag_zag(max_less, min_greater) return new_self._splay_step(new_max_less, new_min_greater, payload) if self._arb_lt(payload, self.right.payload) \ and self.right.left is not None: new_max_less, new_self, new_min_greater = \ self._zag_zig(max_less, min_greater) return new_self._splay_step(new_max_less, new_min_greater, payload) return self._zag(max_less, min_greater) return max_less, self, min_greater Node._splay_step = _splay_step del _splay_step
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
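The text above mentions that choosing and applying steps in a loop avoids function calls and recursion-depth limits, but the notebook only shows the recursive version. The following is a rough iterative sketch of the same case analysis (an illustration only, assumed to mirror `_splay_step` exactly; it reuses the `_zig`, `_zag`, `_zig_zig`, `_zig_zag`, `_zag_zig` and `_zag_zag` helpers defined earlier):

```python
def splay_step_iterative(node, max_less, min_greater, payload):
    # Iterative counterpart to Node._splay_step: keep applying the appropriate
    # step until the payload sits at the current root or the search path ends
    # at a missing child.
    while True:
        if node._arb_lt(payload, node.payload):
            if node.left is None:
                return max_less, node, min_greater
            if node._arb_lt(payload, node.left.payload) and node.left.left is not None:
                max_less, node, min_greater = node._zig_zig(max_less, min_greater)
            elif node._arb_gt(payload, node.left.payload) and node.left.right is not None:
                max_less, node, min_greater = node._zig_zag(max_less, min_greater)
            else:
                return node._zig(max_less, min_greater)
        elif node._arb_gt(payload, node.payload):
            if node.right is None:
                return max_less, node, min_greater
            if node._arb_gt(payload, node.right.payload) and node.right.right is not None:
                max_less, node, min_greater = node._zag_zag(max_less, min_greater)
            elif node._arb_lt(payload, node.right.payload) and node.right.left is not None:
                max_less, node, min_greater = node._zag_zig(max_less, min_greater)
            else:
                return node._zag(max_less, min_greater)
        else:
            return max_less, node, min_greater
```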
In the composition `_splay` we additionally attach the subtrees of the new root to $L, R$ and set $L, R$ as the subtrees of this new root. Instead of $L, R$ we use only a single tree `set_aside`, to which we attach on the corresponding side.
def _splay(self, payload): set_aside = Node(None, None, None) max_less, new_self, min_greater = \ self._splay_step(set_aside, set_aside, payload) max_less.right = new_self.left min_greater.left = new_self.right new_self.left = set_aside.right new_self.right = set_aside.left return new_self Node._splay = _splay del _splay
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
The next example shows how a single splay can turn a worst-case balanced tree into a much better balanced one:
# flake8: noqa a1 = Node("a", None, None); c1 = Node("c", None, None); e1 = Node("e", None, None); g1 = Node("g", None, None) j1 = Node("j", None, None); l1 = Node("l", None, None); n1 = Node("n", None, None); p1 = Node("p", None, None) b1 = Node("b", a1, c1); d1 = Node("d", b1, e1); f1 = Node("f", d1, g1); h1 = Node("h", f1, j1) k1 = Node("k", h1, l1); m1 = Node("m", k1, n1); o1 = Node("o", m1, p1) a2 = Node("a", None, None); c2 = Node("c", None, None); e2 = Node("e", None, None); g2 = Node("g", None, None) j2 = Node("j", None, None); l2 = Node("l", None, None); n2 = Node("n", None, None); p2 = Node("p", None, None) b2 = Node("b", a2, c2); d2 = Node("d", b2, e2); f2 = Node("f", d2, g2); h2 = Node("h", f2, j2) k2 = Node("k", h2, l2); m2 = Node("m", k2, n2); o2 = Node("o", m2, p2) dot = o1.graph(Method("splay()"), o2._splay("b")) for subtree in a1, c1, e1, g1, j1, l1, n1, p1, a2, c2, e2, g2, j2, l2, n2, p2: dot.node(str(id(subtree)), shape="triangle") dot
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Another example contains all steps except Zag:
# flake8: noqa j1 = Node("j", None, None); l1 = Node("l", None, None); n1 = Node("n", None, None); p1 = Node("p", None, None) g1 = Node("g", None, None); e1 = Node("e", None, None); r1 = Node("r", None, None); c1 = Node("c", None, None) a1 = Node("a", None, None); t1 = Node("t", None, None); v1 = Node("v", None, None) k1 = Node("k", j1, l1); m1 = Node("m", k1, n1); o1 = Node("o", m1, p1); h1 = Node("h", g1, o1) f1 = Node("f", e1, h1); q1 = Node("q", f1, r1); d1 = Node("d", c1, q1); b1 = Node("b", a1, d1) s1 = Node("s", b1, t1); u1 = Node("u", s1, v1) j2 = Node("j", None, None); l2 = Node("l", None, None); n2 = Node("n", None, None); p2 = Node("p", None, None) g2 = Node("g", None, None); e2 = Node("e", None, None); r2 = Node("r", None, None); c2 = Node("c", None, None) a2 = Node("a", None, None); t2 = Node("t", None, None); v2 = Node("v", None, None) k2 = Node("k", j2, l2); m2 = Node("m", k2, n2); o2 = Node("o", m2, p2); h2 = Node("h", g2, o2) f2 = Node("f", e2, h2); q2 = Node("q", f2, r2); d2 = Node("d", c2, q2); b2 = Node("b", a2, d2) s2 = Node("s", b2, t2); u2 = Node("u", s2, v2) dot = u1.graph(Method("splay()"), u2._splay("k")) for subtree in a1, c1, e1, g1, j1, l1, n1, p1, r1, t1, v1, a2, c2, e2, g2, j2, l2, n2, p2, r2, t2, u2: dot.node(str(id(subtree)), shape="triangle") dot
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Finally, we consider the following example to have Zag covered as well:
# flake8: noqa a1 = Node("a", None, None); c1 = Node("c", None, None); e1 = Node("e", None, None) d1 = Node("d", c1, e1); b1 = Node("b", a1, d1) a2 = Node("a", None, None); c2 = Node("c", None, None); e2 = Node("e", None, None) d2 = Node("d", c2, e2); b2 = Node("b", a2, d2) dot = b1.graph(Method("splay()"), b2._splay("d")) for subtree in a1, c1, e1, a2, c2, e2: dot.node(str(id(subtree)), shape="triangle") dot
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Standard operations Next we define the fundamental tree operations for the splay tree: inserting, removing, checking for the presence of an element, and checking for emptiness. Insertion We define the insertion of an element, in which a new payload is inserted into an existing tree, which in general changes the tree.$$\mathrm{Node}.\mathrm{insert}: \mathrm{Payload} \to \mathrm{Node}$$When inserting we do not traverse down the tree, as is done for most trees and also when using the bottom-up splaying approach; instead we splay at the element we want to insert. The root of the splayed tree may already be the element to be inserted; in that case we are done. We denote the tree by $B$ and the new element by $k$. If the tree is still empty, we only need to set the root:$$\begin{aligned}B = \mathrm{Nil} &\Rightarrow B.\mathrm{insert}(k) = \mathrm{Node}(k, \mathrm{Nil}, \mathrm{Nil}) \\B.\mathrm{splay}(k) = \mathrm{Node}(k, x, y) &\Rightarrow B.\mathrm{insert}(k) = B.\mathrm{splay}(k)\end{aligned}$$Otherwise the new root is exactly the element that is the next-greater or next-smaller one in the entire tree relative to the element being inserted, or more formally:$$B.\mathrm{splay}(k) = \mathrm{Node}(x, y, z) \Rightarrow x = k \lor x = \max(\{\kappa \in B: \kappa < k\}) \lor x = \min(\{\kappa \in B: \kappa > k\})$$In these two cases we make the element to be inserted, $k$, the root. In the case that $k$ is smaller than the root of the splayed existing tree $B' = \mathrm{Node}(x, y, z)$, all elements in the left subtree $y$ of $B'$ are nevertheless smaller than $k$, and we set it as the left subtree of $k$. We set the root $x$ of $B'$ as the right subtree, since it, like all elements of the right subtree $z$ of $B'$, is greater than $k$. In doing so we set the left subtree of $x$ to $\mathrm{Nil}$.$$B.\mathrm{splay}(k) = \mathrm{Node}(x, y, z) \land k < x \Rightarrow B.\mathrm{insert}(k) = \mathrm{Node}(k, y, \mathrm{Node}(x, \mathrm{Nil}, z))$$or graphically:
# flake8: noqa B = Node("B", None, None); k1 = Node("k", None, None) y2 = Node("y", None, None); z2 = Node("z", None, None); x2 = Node("x", y2, z2); k2 = Node("k", None, None) y3 = Node("y", None, None); z3 = Node("z", None, None); x3 = Node("x", None, z3); k3 = Node("k", y3, x3) dot = k1.graph(B, Method("B.splay(k)"), k2, x2, Method("x.insert(k) where y < k < x"), k3) for subtree in B, y2, z2, y3, z3: dot.node(str(id(subtree)), shape="triangle") dot
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Conversely, in the case that $k$ is greater than the root $x$ of the splayed tree $B' = \mathrm{Node}(x, y, z)$, all elements in its right subtree $z$ are greater than $k$, and we set the root $x$ of this tree as the left subtree of the new element $k$, where this node then has $y$ as its left subtree and $\mathrm{Nil}$ as its right subtree.$$B.\mathrm{splay}(k) = \mathrm{Node}(x, y, z) \land k > x \Rightarrow B.\mathrm{insert}(k) = \mathrm{Node}(k, \mathrm{Node}(x, y, \mathrm{Nil}), z)$$and graphically:
# flake8: noqa B = Node("B", None, None); k1 = Node("k", None, None) y2 = Node("y", None, None); z2 = Node("z", None, None); x2 = Node("x", y2, z2); k2 = Node("k", None, None) y3 = Node("y", None, None); z3 = Node("z", None, None); x3 = Node("x", y3, None); k3 = Node("k", x3, z3) dot = B.graph(k1, Method("B.splay(k)"), x2, k2, Method("x.insert(k) where y < k < x"), k3) for subtree in B, y2, z2, y3, z3: dot.node(str(id(subtree)), shape="triangle") dot def insert(self, payload): self = self._splay(payload) if self._arb_eq(payload, self.payload): return self if self._arb_lt(payload, self.payload): tmp = self.left self.left = None return Node(payload, tmp, self) tmp = self.right self.right = None return Node(payload, self, tmp) Node.insert = insert del insert
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
For the empty tree we only need to put the element into a new node. We define the class `SplayTree`, among other things to handle the case of the empty tree; it hides from the caller both the management of the empty tree and the fact that the root changes with every splay.
class SplayTree: def __init__(self): self.tree = None def insert(self, payload): if self.tree is None: self.tree = Node(payload, None, None) else: self.tree = self.tree.insert(payload)
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
For the `SplayTree` we also define `graph` in order to expose `Node.graph`.
def graph(self): if self.tree is None: return graphviz.Digraph() return self.tree.graph() SplayTree.graph = graph del graph
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
A few examples show the insertion of nodes into splay trees:
my_splay = SplayTree() my_splay.insert("a") my_splay.graph() my_splay.insert("c") my_splay.graph() my_splay.insert("b") my_splay.graph()
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Removal Next we define the removal of an element, in which a payload is removed from an existing tree, again changing the tree in general.$$\mathrm{Node}.\mathrm{remove}: \mathrm{Payload} \to \mathrm{Node}$$We again splay at the element to be removed and then have a tree whose root is the element to be removed. If it turns out after splaying that the element is not present, we raise a `KeyError`, [since this is how Python's sets behave as well](https://docs.python.org/3.7/library/stdtypes.html#set "Python Software Foundation (2020): The Python Standard Library/Built-in Types/set, Python Documentation"). In the definition we write $\downarrow$ for the undefined. This also happens when the tree is empty:$$\begin{aligned}B = \mathrm{Nil} &\Rightarrow B.\mathrm{remove}(k)\downarrow \\B.\mathrm{splay}(k) \neq \mathrm{Node}(k, x, y) &\Rightarrow B.\mathrm{remove}(k)\downarrow\end{aligned}$$Otherwise we check whether one subtree of the splayed tree is empty. In that case the new tree is simply the other subtree.$$\begin{aligned}B.\mathrm{splay}(k) = \mathrm{Node}(k, \mathrm{Nil}, x) &\Rightarrow B.\mathrm{remove}(k) = x \\B.\mathrm{splay}(k) = \mathrm{Node}(k, x, \mathrm{Nil}) &\Rightarrow B.\mathrm{remove}(k) = x\end{aligned}$$If that is not the case, we make sure that the right subtree of the splayed tree no longer has a left subtree, so that we can attach the left subtree of the splayed tree there. A figure illustrates what is meant:
# flake8: noqa B = Node("B", None, None) v1 = Node("v", None, None); w1 = Node("w", None, None); y1 = Node("y", None, None); x1 = Node("x", w1, y1); k1 = Node("k", v1, x1) v2 = Node("v", None, None); y2 = Node("y'", None, None); x2 = Node("x'", None, y2); k2 = Node("k", v2, x2) v3 = Node("v", None, None); y3 = Node("y'", None, None); x3 = Node("x'", v3, y3) dot = B.graph(Method("B.splay(k)"), k1, Method("x.splay(k)"), k2, Method("k.remove(k)"), x3) for subtree in B, v1, w1, y1, v2, y2, v3, y3: dot.node(str(id(subtree)), shape="triangle") dot
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
We turn the tree $\mathrm{Node}(x, w, y)$ into the tree $\mathrm{Node}(x', \mathrm{Nil}, y')$ by splaying at $k$. Since the minimum of $\mathrm{Node}(x, w, y)$ is greater than $k$, precisely this minimum must become the root ($x'$) and can no longer have a left subtree.$$B.\mathrm{splay}(k) = \mathrm{Node}(k, v, \mathrm{Node}(x, w, y)) \land \mathrm{Node}(x, w, y).\mathrm{splay}(k) = \mathrm{Node}(x', \mathrm{Nil}, y') \Rightarrow B.\mathrm{remove}(k) = \mathrm{Node}(x', v, y')$$For `Node` we implement:
def remove(self, payload): self = self._splay(payload) if not self._arb_eq(payload, self.payload): return False, self if self.left is None: return True, self.right if self.right is None: return True, self.left tmp = self.left self = self.right._splay(payload) self.left = tmp return True, self Node.remove = remove del remove
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
In the implementation we only raise the `KeyError` at the level of `SplayTree`, so that we can still tidy up in the tree first. After all, in case of a `KeyError` the user might want to catch it and still keep using the set.
def remove(self, payload): if self.tree is None: raise KeyError(payload) rc, self.tree = self.tree.remove(payload) if not rc: raise KeyError(payload) SplayTree.remove = remove del remove
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
A few examples show the removal of elements.
my_splay = SplayTree() for letter in ["a", "c", "b"]: my_splay.insert(letter) my_splay.graph() my_splay.remove("b") my_splay.graph() my_splay.remove("a") my_splay.graph()
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Finding Next we define checking a tree for an element. In our definition, a tuple consisting of the presence flag and the new root is returned.$$\mathrm{Node}.\mathrm{contains}: \mathrm{Payload} \to \langle\mathbb{B}, \mathrm{Node}\rangle$$We splay at the element we are looking for and can already tell from the root whether the element is present. It is even easier when there are no elements in the tree at all:$$\begin{aligned}B = \mathrm{Nil} &\Rightarrow B.\mathrm{contains}(k) = (\mathrm{false}, \mathrm{Nil}) \\B.\mathrm{splay}(k) = \mathrm{Node}(k, y, z) &\Rightarrow B.\mathrm{contains}(k) = (\mathrm{true}, \mathrm{Node}(k, y, z)) \\B.\mathrm{splay}(k) = \mathrm{Node}(x, y, z) \land k \neq x &\Rightarrow B.\mathrm{contains}(k) = (\mathrm{false}, \mathrm{Node}(x, y, z))\end{aligned}$$For `Node` we have
def contains(self, payload): self = self._splay(payload) return self._arb_eq(payload, self.payload), self Node.contains = contains del contains
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
and for the `SplayTree`
def contains(self, payload): if self.tree is None: return False contains, self.tree = self.tree.contains(payload) return contains SplayTree.contains = contains del contains
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Examples show us:
my_splay = SplayTree() for letter in ["a", "c", "b"]: my_splay.insert(letter) my_splay.graph() my_splay.contains("a") my_splay.graph() my_splay.contains("d") my_splay.graph()
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Emptiness check Finally we define how to check whether the tree is empty.$$\mathrm{Node}.\mathrm{is\_empty}: \langle\rangle \to \mathbb{B}$$$$\mathrm{Node}.\mathrm{is\_empty}() = (\mathrm{Node} = \mathrm{Nil})$$The implementation is done from `SplayTree`.
def is_empty(self): return self.tree is None SplayTree.is_empty = is_empty del is_empty my_splay = SplayTree() my_splay.is_empty() my_splay.insert("a") my_splay.is_empty()
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Demonstration As a small demonstration of `SplayTree`, we use it to implement the computation of all primes up to a number `n` with the *Sieve of Eratosthenes*:
def primes(n): primes = SplayTree() for i in range(2, n + 1): primes.insert(i) for i in range(2, n + 1): for j in range(2 * i, n + 1, i): try: primes.remove(j) except KeyError: pass return primes
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Since the numbers were considered sequentially, the tree is initially maximally unbalanced:
tree = primes(25) tree.graph()
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
However, we can see, for example, that a single splay already produces a much better balanced tree:
tree.tree = tree.tree._splay(7) tree.graph()
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Since we will reuse this notebook later, we remove irrelevant names from the namespace.
del B, a1, a2, b1, b2, c1, c2, d1, d2, dot, e1, e2, f1, f2, g1, g2, h1, h2, \ j1, j2, k1, k2, k3, l1, l2, letter, m1, m2, my_splay, n1, n2, o1, o2, p1, \ p2, primes, q1, q2, r1, r2, s1, s2, subtree, t1, t2, tree, v1, v2, v3, \ w1, w2, x1, x2, x3, y1, y2, y3, z1, z2, z3
_____no_output_____
MIT
1-Splay-Trees.ipynb
jakobn-ai/SplayPy-Paper
Notebook to play with some graph forms...
### Standard Magic and startup initializers. import math import csv import numpy as np import random import itertools import matplotlib import matplotlib.pyplot as plt import pandas as pd %matplotlib inline matplotlib.style.use('seaborn-whitegrid') font = {'family' :'serif', 'size' : 22} matplotlib.rc('font', **font) ### Load Datafiles.. ## Load some stuff from Pickle.. import pickle ## Open an older file with open("./1000_stepping_run.pickle", 'rb') as input_file: results = pickle.load(input_file) ## Open an older file with open("./1000_stepping_run_uniform.pickle", 'rb') as input_file: uniform_results = pickle.load(input_file) # Print the keys. print("players, tasks, sample") for k,v in results.items(): print(k) df = pd.DataFrame(results) m = df.mean().unstack().T e = df.std().unstack().T print(m) print(e) #print(df.max()) uf = pd.DataFrame(uniform_results) mu = uf.mean().unstack().T eu = uf.std().unstack().T print(mu) print(eu) color_list = plt.cm.Paired(np.linspace(0, 1, 3)) color_list = color_list[:6] #a = m.plot(kind='line', yerr=e.values.T, marker='*',figsize=(10,8),linewidth=3, color=color_list) a = m.plot(kind='line', marker='*',figsize=(10,8),linewidth=3, color=color_list) handles, labels = a.get_legend_handles_labels() plt.legend(loc="upper left", bbox_to_anchor=[0, 1], shadow=True, title="Players", fancybox=True, handles=handles[::-1], labels=labels[::-1]) #a.legend(handles[::-1], labels[::-1]) mu.plot(kind='line', yerr=eu.values.T, marker='o', ax = a, color=color_list, linestyle='--',linewidth=3, legend=False) #mu.plot(kind='line', marker='o', ax = a, color=color_list, linestyle='--',linewidth=3, legend=False) a.set_yscale("log", nonposy='clip') a.set_xlim([0,80]) a.set_ylim([0,1000]) #plt.legend(bbox_to_anchor = (0,0.04,1,1), bbox_transform=plt.gcf().transFigure, loc='upper center', ncol=6, borderaxespad=0.) a.set_xlabel("Services Per Player ($|T_i|$)") a.set_ylabel("Solve Time (log(s))") plt.tight_layout() plt.savefig("test.pdf",bbox_inches='tight') plt.show()
2 5 10 5 0.006819 0.016678 0.033212 10 0.063348 0.178242 0.377655 30 2.620236 9.073796 23.524422 50 25.765830 89.238418 222.992741 70 107.386757 379.349311 787.160301 2 5 10 5 0.005216 0.010583 0.020155 10 0.034577 0.135762 0.244456 30 2.349703 10.141068 28.983025 50 35.372481 118.425861 311.230165 70 146.042514 769.699986 1004.326137 2 5 10 5 0.004272 0.010473 0.020113 10 0.029629 0.078207 0.162628 30 0.634193 2.217992 6.046969 50 3.111379 13.037105 46.203912 70 9.442468 40.573399 158.673905 2 5 10 5 0.001078 0.001731 0.003553 10 0.004977 0.011571 0.028786 30 0.093679 0.330223 0.887312 50 0.444567 1.804271 17.048166 70 1.382044 5.754932 62.828016
BSD-3-Clause
Graphing.ipynb
nmattei/InterdependentSchedulingGames
# Gives us a well defined version of tensorflow try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass # will also work, but nightly build might contain surprises # !pip install -q tf-nightly-gpu-2.0-preview import tensorflow as tf print(tf.__version__) import matplotlib.pyplot as plt import pandas as pd import tensorflow as tf import numpy as np from tensorflow import keras import pandas as pd df = pd.read_csv('https://raw.githubusercontent.com/DJCordhose/ml-workshop/master/data/insurance-customers-1500.csv', sep=';') y = df['group'].values X = df.drop('group', axis='columns').values from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
_____no_output_____
MIT
notebooks/tf2/tf-dense-insurance-reg.ipynb
kmunve/ml-workshop
An experimental approach:
- keep adding regularization to make validation and train scores come closer to each other
- this will come at the cost of train scores going down
- if both values start going down, you have gone too far
- each experiment takes some time
- for larger datasets and more complex models, some people start by overfitting on a subsample of the data (because it trains much faster)
  - then you can be sure you have an architecture that at least has the capacity to solve the problem
  - then keep adding regularizations
  - eventually try using the complete data
- if you want to use batch normalization, place it between the raw output of the neuron and the activation function (as sketched in the cell after this list)
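The last bullet above describes where batch normalization should sit. A minimal sketch of that placement, assuming the layer sizes used in the next cell (3 input features, 3 classes, 500-unit hidden layer), could look like this: `Dense` is created without an activation, `BatchNormalization` normalizes the raw pre-activation output, and only then is the non-linearity applied.

```python
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense, BatchNormalization, Activation, Dropout

bn_model = keras.Sequential()
bn_model.add(Input(shape=(3,)))
bn_model.add(Dense(500))            # raw linear output, no activation yet
bn_model.add(BatchNormalization())  # normalize between raw output and activation
bn_model.add(Activation('relu'))    # non-linearity applied after normalization
bn_model.add(Dropout(0.6))          # optional additional regularization
bn_model.add(Dense(3, activation='softmax'))
bn_model.compile(loss='sparse_categorical_crossentropy',
                 optimizer='adam', metrics=['accuracy'])
```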
from tensorflow.keras.layers import Input, Dense, Dropout, \ BatchNormalization, Activation num_features = 3 num_categories = 3 dropout = 0.6 model = keras.Sequential() model.add(Input(name='input', shape=(num_features,))) # reduce capacity by decreasing number of neurons model.add(Dense(500, name='hidden1')) model.add(Activation('relu')) # model.add(BatchNormalization()) # model.add(Dropout(dropout)) model.add(Dense(500, name='hidden2')) model.add(Activation('relu')) # model.add(BatchNormalization()) # model.add(Dropout(dropout)) model.add(Dense(name='output', units=num_categories, activation='softmax')) model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() %%time # reducing batch size might increase overfitting, # but might be necessary to reduce memory requirements BATCH_SIZE=1000 # reduce this based on what you see in the training history EPOCHS = 10000 model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) history = model.fit(X_train, y_train, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_split=0.2, verbose=0) train_loss, train_accuracy = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE) train_loss, train_accuracy test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE) test_loss, test_accuracy plt.yscale('log') plt.ylabel("loss") plt.xlabel("epochs") plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.legend(["Loss", "Valdation Loss"]) plt.ylabel("accuracy") plt.xlabel("epochs") plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.legend(["Accuracy", "Valdation Accuracy"]) # category 1 should have the highest probability model.predict(np.array([[100, 48, 10]])) assert model.predict(np.array([[100, 48, 10]])).argmax() == 1
_____no_output_____
MIT
notebooks/tf2/tf-dense-insurance-reg.ipynb
kmunve/ml-workshop
Notebook Template This notebook is stubbed out with some project paths, loading of environment variables, and common package imports to speed up the process of starting a new project. It is highly recommended that you copy and rename this notebook following the naming convention outlined in the readme: name notebooks with a two-digit number prefix such as `01-first-thing` and `02-next-thing`. This way the order of the notebooks is apparent, and each notebook does not need to be needlessly long, complex, and difficult to follow.
import importlib import os from pathlib import Path import sys from arcgis.features import GeoAccessor, GeoSeriesAccessor from arcgis.gis import GIS from dotenv import load_dotenv, find_dotenv import pandas as pd # import arcpy if available if importlib.util.find_spec("arcpy") is not None: import arcpy # paths to common data locations - NOTE: to convert any path to a raw string, simply use str(path_instance) dir_prj = Path.cwd().parent dir_data = dir_prj/'data' dir_raw = dir_data/'raw' dir_ext = dir_data/'external' dir_int = dir_data/'interim' dir_out = dir_data/'processed' gdb_raw = dir_raw/'raw.gdb' gdb_int = dir_int/'interim.gdb' gdb_out = dir_out/'processed.gdb' # import the project package from the project package path - only necessary if you are not using a unique environemnt for this project sys.path.append(str(dir_prj/'src')) import white_pass_feature_selection # load the "autoreload" extension so that code can change, & always reload modules so that as you change code in src, it gets loaded %load_ext autoreload %autoreload 2 # load environment variables from .env load_dotenv(find_dotenv()) # create a GIS object instance; if you did not enter any information here, it defaults to anonymous access to ArcGIS Online gis = GIS( url=os.getenv('ESRI_GIS_URL'), username=os.getenv('ESRI_GIS_USERNAME'), password=None if len(os.getenv('ESRI_GIS_PASSWORD')) is 0 else os.getenv('ESRI_GIS_PASSWORD') ) gis
_____no_output_____
Apache-2.0
notebooks/notebook-template.ipynb
knu2xs/white-pass-feature-selection
# Benchmarking Annotation Storage

Click to open in: [[GitHub](https://github.com/TissueImageAnalytics/tiatoolbox/tree/master/benchmarks/annotation_store.ipynb)][[Colab](https://colab.research.google.com/github/TissueImageAnalytics/tiatoolbox/blob/master/benchmarks/annotation_store.ipynb)][[Kaggle](https://kaggle.com/kernels/welcome?src=https://github.com/TissueImageAnalytics/tiatoolbox/blob/master/benchmarks/annotation_store.ipynb)]

_In order to run this notebook on a Kaggle platform: 1) click the Kaggle URL, 2) click on Settings on the right of the Kaggle screen, 3) log in to your Kaggle account, 4) tick the "Internet" checkbox under Settings to enable the necessary downloads._

**NOTE:** Some parts of this notebook require a lot of memory. Part 2 in particular may not run on memory-constrained systems. The notebook runs well on a MacBook Air (M1, 2020) but will use a lot of swap; it may require >64GB of memory for the second half to avoid using swap.

## About This Notebook

Managing annotations, either created by hand or from model output, is a common task in computational pathology. For a small number of annotations this may be trivial. However, for large numbers of annotations, it is often necessary to store the annotations in a more structured format such as a database. This is because finding a desired subset of annotations within a very large collection, for example over one million cell boundary polygons derived from running HoVerNet on a WSI, may be very slow if performed in a naive manner. In the toolbox, we implement two storage methods to make handling annotations easier: `DictionaryStore` and `SQLiteStore`.

## Storage Classes

Both stores act as a key-value store where the key is the annotation ID (as a string) and the value is the annotation. This follows the Python [`MutableMapping`](https://docs.python.org/3/library/collections.abc.html#collections.abc.MutableMapping) interface, meaning that the stores can be used in the same way as a regular Python dictionary (`dict`); a short runnable sketch of this dictionary-style interface follows the imports cell below.

The `DictionaryStore` is implemented internally using a Python dictionary. It is a relatively simple class, operating with all annotations in memory and using a simple scan method to search for annotations. This works very well for a small number of annotations. In contrast, the `SQLiteStore` is implemented using a SQLite database (either in memory or on disk); it is a more complex class making use of an rtree index to efficiently spatially search for annotations. This is much more suited to a very large number of annotations. However, both classes follow the same interface and can be used interchangeably for almost all methods (`SQLiteStore` has some additional methods).

## Provided Functionality (Mini Tutorial)

The storage classes provide a lot of functionality, including all of the standard `MutableMapping` methods as well as some additional ones for querying the collection of annotations. Below is a brief summary of the main functionality.

### Adding Annotations

```python
from tiatoolbox.annotation.storage import Annotation, DictionaryStore, SQLiteStore
from shapely.geometry import Polygon

# Create a new store. If no path is given it is an in-memory store.
store = DictionaryStore()

# An annotation is a shapely geometry and a JSON serializable dictionary
annotation = Annotation(Polygon.from_bounds(0, 0, 1, 1), {'id': '1'})

# Add the annotation to the store in the same way as a dictionary
store["foo"] = annotation

# Bulk append is also supported. This will be faster in some contexts
# (e.g. for an SQLiteStore) than adding them one at a time.
# Here we add 100 simple box annotations. As we have not specified a set
# of keys to use, a new UUID is generated for each. The respective
# generated keys are also returned.
annotations = [
    Annotation(Polygon.from_bounds(n, n, n + 1, n + 1), {'id': n})
    for n in range(100)
]
keys = store.append_many(annotations)
```

### Removing Annotations

```python
# Remove an annotation by key
del store["foo"]

# Bulk removal
keys = ["1234-5676....", "..."]  # etc.
store.remove_many(keys)
```

### Querying Within a Region

```python
# Find all annotations which intersect a polygon
search_region = Polygon.from_bounds(0, 0, 10, 10)
result = store.query(search_region)

# Find all annotations which are contained within a polygon
search_region = Polygon.from_bounds(0, 0, 10, 10)
result = store.query(search_region, geometry_predicate="contains")
```

### Querying Using A Predicate Statement

```python
# 'props' is a provided shorthand to access the 'properties' dictionary
results = store.query(where="props['id'] == 1")
```

### Serializing and Deserializing

```python
# Serialize the store to a GeoJSON string
json_string = store.to_geojson()

# Serialize the store to a GeoJSON file
store.to_geojson("boxes.geojson")

# Deserialize a GeoJSON file into a store (even of a different type)
sqlitestore = SQLiteStore.from_geojson("boxes.geojson")

# The above is an in-memory store. We can also now write this to disk
# as an SQLite database.
sqlitestore.dump("boxes.db")
```

## Benchmarking

Here we evaluate the storage efficiency and data querying performance of the annotation store versus other common formats. We will evaluate some common situations and use cases including:

- Disk I/O (tested with an SSD)
- Querying the data for annotations within a box region
- Querying the data for annotations within a polygon region
- Querying the data with a predicate e.g. 'class=1'

All saved output is from running this notebook on a 2020 M1 MacBook Air with 16GB RAM.

## Imports
import sys

sys.path.append("..")  # If running locally without pypi installed tiatoolbox

import copy
import pickle
import tempfile
import timeit
import uuid
from numbers import Number
from pathlib import Path
from typing import Generator, List, Optional, Tuple

import numpy as np
from IPython.display import display
from matplotlib import pyplot as plt
from shapely.geometry import MultiPolygon, Point, Polygon
from tqdm.auto import tqdm

from tiatoolbox.annotation.storage import Annotation, DictionaryStore, SQLiteStore

plt.style.use("ggplot")
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
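As noted in the Storage Classes section above, both store types implement Python's `MutableMapping` interface. The following is a minimal sketch of that dictionary-style usage with the classes just imported; it is not part of the original benchmarks, and the key `"foo"` and the `"class"` property are purely illustrative.

```python
# Sketch of dictionary-style usage; keys and property values are illustrative.
store = DictionaryStore()
store["foo"] = Annotation(Polygon.from_bounds(0, 0, 1, 1), {"class": 0})

print(len(store))                      # number of annotations, like len() on a dict
print("foo" in store)                  # membership test by key
annotation = store["foo"]              # lookup by key returns an Annotation
print(annotation.properties["class"])  # properties is a plain dictionary

for key, ann in store.items():         # iteration yields (key, Annotation) pairs
    print(key, ann.geometry.bounds)

del store["foo"]                       # removal by key, as on a dict
```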
## Data Generation & Utility Functions

Here we define some useful functions to generate artificial data and visualise the results.
def cell_polygon(
    xy: Tuple[Number, Number],
    n_points: int = 20,
    radius: Number = 8,
    noise: Number = 0.01,
    eccentricity: Tuple[Number, Number] = (1, 3),
    repeat_first: bool = True,
    direction: str = "CCW",
    seed: int = 0,
) -> Polygon:
    """Generate a fake cell boundary polygon.

    Borrowed from tiatoolbox unit tests.

    Cell boundaries are generated as ellipsoids with randomised
    eccentricity, added noise, and a random rotation.

    Args:
        xy (tuple(int)):
            The x,y centre point to generate the cell boundary around.
        n_points (int):
            Number of points in the boundary. Defaults to 20.
        radius (float):
            Radius of the points from the centre. Defaults to 8.
        noise (float):
            Noise to add to the point locations. Defaults to 0.01.
        eccentricity (tuple(float)):
            Range of values (low, high) to use for randomised
            eccentricity. Defaults to (1, 3).
        repeat_first (bool):
            Enforce that the last point is equal to the first.
        direction (str):
            Ordering of the points. Defaults to "CCW". Valid options
            are: counter-clockwise "CCW", and clockwise "CW".
        seed:
            Seed for the random number generator. Defaults to 0.

    """
    from shapely import affinity

    rand_state = np.random.get_state()
    np.random.seed(seed)

    if repeat_first:
        n_points -= 1

    # Generate points about an ellipse with random eccentricity
    x, y = xy
    alpha = np.linspace(0, 2 * np.pi - (2 * np.pi / n_points), n_points)
    rx = radius * (np.random.rand() + 0.5)
    ry = np.random.uniform(*eccentricity) * radius - 0.5 * rx
    x = rx * np.cos(alpha) + x + (np.random.rand(n_points) - 0.5) * noise
    y = ry * np.sin(alpha) + y + (np.random.rand(n_points) - 0.5) * noise
    boundary_coords = np.stack([x, y], axis=1).astype(int).tolist()

    # Copy first coordinate to the end if required
    if repeat_first:
        boundary_coords = boundary_coords + [boundary_coords[0]]

    # Swap direction
    if direction.strip().lower() == "cw":
        boundary_coords = boundary_coords[::-1]

    polygon = Polygon(boundary_coords)

    # Add random rotation
    angle = np.random.rand() * 360
    polygon = affinity.rotate(polygon, angle, origin="centroid")

    # Restore the random state
    np.random.set_state(rand_state)

    return polygon


def cell_grid(
    size: Tuple[int, int] = (10, 10), spacing: Number = 25
) -> Generator[Polygon, None, None]:
    """Generate a grid of cell boundaries."""
    return (
        cell_polygon(xy=np.multiply(ij, spacing), repeat_first=False, seed=n)
        for n, ij in enumerate(np.ndindex(size))
    )


def plot_results(
    experiments: List[List[Number]], title: str, capsize=5, **kwargs
) -> None:
    """Plot the results of a benchmark.

    Uses the min for the bar height (see
    https://docs.python.org/2/library/timeit.html#timeit.Timer.repeat),
    and plots a min-max error bar.

    """
    import matplotlib.patheffects as PathEffects

    x = range(len(experiments))
    color = [f"C{x_i}" for x_i in x]
    plt.bar(
        x=x,
        height=[min(e) for e in experiments],
        color=color,
        yerr=[[0 for e in experiments], [max(e) - min(e) for e in experiments]],
        capsize=capsize,
        **kwargs,
    )
    for i, (runs, c) in enumerate(zip(experiments, color)):
        plt.text(
            i,
            min(runs),
            f" {min(runs):.4f}s",
            ha="left",
            va="bottom",
            color=c,
            zorder=10,
            fontweight="bold",
            path_effects=[
                PathEffects.withStroke(linewidth=2, foreground="w"),
            ],
        )
    plt.title(title)
    plt.hlines(
        0.5,
        -0.5,
        len(experiments) - 0.5,
        linestyles="dashed",
        colors="black",
        alpha=0.5,
    )
    plt.yscale("log")
    plt.xlabel("Store Type")
    plt.ylabel("Time (s)")
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
## Display Some Generated Data
for n in range(4):
    display(cell_polygon(xy=(0, 0), n_points=20, repeat_first=False, seed=n))
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
## Randomised Cell Boundaries

Here we create a function to generate a grid of cells for testing. It uses a fixed seed for reproducibility.

### A Sample 5×5 Grid
from shapely.geometry import MultiPolygon

MultiPolygon(polygons=list(cell_grid(size=(5, 5), spacing=35)))
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
## Part 1: Small Scale Benchmarking of Annotation Storage

Using the already defined data generation functions (`cell_polygon` and `cell_grid`), we create some simple artificial cell boundaries by creating a circle of points, adding some noise, scaling to introduce eccentricity, and then rotating. We use 20 points per cell, which is a reasonably high value for cell annotation. However, this can be adjusted.

### 1.1) Appending Annotations (In-Memory & Disk I/O)

Here we test:

1. A Python dictionary based in-memory store (`DictionaryStore`)
2. An SQLite database based in-memory store (`SQLiteStore`)

Both of these stores may operate in memory. The `SQLiteStore` may also be backed by an on-disk file for datasets which are too large to fit in memory. The `DictionaryStore` class can serialise/deserialise itself to/from disk in a line-delimited GeoJSON format (each line, separated by `\n`, is a valid GeoJSON object). A small sketch of both persistence options follows the next code cell.
# Convert to annotations (a dataclass pairing a geometry and (optional)
# key-value properties)
# Run time: ~2s
annotations = [
    Annotation(polygon)
    for polygon in cell_grid(size=(100, 100), spacing=35)
]
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
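As mentioned above, the `SQLiteStore` can be backed by a file on disk and the `DictionaryStore` serialises to GeoJSON. Below is a minimal sketch of both options, assuming that a file path can be passed to the `SQLiteStore` constructor (as is done with the temporary files in the disk I/O benchmark later); the file names are illustrative and not part of the original benchmark.

```python
# Sketch only; file names are illustrative and assumed to be writable.

# An SQLiteStore backed by a database file on disk rather than held in memory
disk_store = SQLiteStore("annotations.db")   # assumption: a path is accepted here
disk_store.append_many(annotations)
disk_store.commit()                          # flush pending inserts to the file

# A DictionaryStore serialised to GeoJSON, then loaded into an SQLiteStore
mem_store = DictionaryStore()
mem_store.append_many(annotations)
mem_store.to_geojson("annotations.geojson")  # write GeoJSON to disk
sql_copy = SQLiteStore.from_geojson("annotations.geojson")
sql_copy.dump("annotations_copy.db")         # write the resulting database to disk
```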
#### 1.1.1) In Memory Append
# Run time: ~5s

# Time dictionary store
dict_runs = timeit.repeat(
    "dict_store.append_many(annotations)",
    setup="dict_store = DictionaryStore()",
    globals={"DictionaryStore": DictionaryStore, "annotations": annotations},
    number=1,
    repeat=3,
)

# Time SQLite store
sqlite_runs = timeit.repeat(
    "sql_store.append_many(annotations)",
    setup="sql_store = SQLiteStore()",
    globals={"SQLiteStore": SQLiteStore, "annotations": annotations},
    number=1,
    repeat=3,
)

# Plot the results
plot_results(
    experiments=[dict_runs, sqlite_runs],
    title="Time to Append 10,000 Annotations In Memory",
    tick_label=["DictionaryStore", "SQLiteStore"],
)
plt.hlines(0.5, -0.5, 1.5, linestyles="dashed", color="k")
plt.xlim([-0.5, 1.5])
plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
Note that inserting into the `SQLiteStore` is much slower than into the `DictionaryStore`. Appending to a `DictionaryStore` simply requires adding a memory reference to a dictionary, so it is a very fast operation. On the other hand, for the `SQLiteStore`, insertion is slower because the data must be serialised for the database and the R-Tree spatial index must also be updated. Updating the index is a relatively expensive operation. However, this spatial index allows for very fast queries of a very large set of annotations within a set of spatial bounds.

Insertion is typically only performed once for each annotation, whereas queries may be performed many times on the annotation set. Therefore, it makes sense to trade a more expensive insertion for fast queries, as the cost of insertion will be amortised over a number of queries on the data. Additionally, data may be written to the database from multiple threads or subprocesses (so long as a new instance of `SQLiteStore` is created for each thread or subprocess to attach to the database on disk), thus freeing up the main thread.

For comparison, we also benchmark bulk insertion plus serialising to disk as line-delimited GeoJSON from the `DictionaryStore`, as this is its default serialisation-to-disk method (`DictionaryStore.dump(file_path)`).
# Run time: ~10s
setup = "fp.truncate(0)\n" "store = Store(fp)"  # Clear the file

# Time dictionary store
with tempfile.NamedTemporaryFile("w+") as fp:
    dict_runs = timeit.repeat(
        ("store.append_many(annotations)\n" "store.commit()"),
        setup=setup,
        globals={"Store": DictionaryStore, "annotations": annotations, "fp": fp},
        number=1,
        repeat=3,
    )

# Time SQLite store
with tempfile.NamedTemporaryFile("w+b") as fp:
    sqlite_runs = timeit.repeat(
        ("store.append_many(annotations)\n" "store.commit()"),
        setup=setup,
        globals={"Store": SQLiteStore, "annotations": annotations, "fp": fp},
        number=1,
        repeat=3,
    )

# Plot the results
plot_results(
    experiments=[dict_runs, sqlite_runs],
    title="Time to Append & Serialise 10,000 Annotations To Disk",
    tick_label=["DictionaryStore", "SQLiteStore"],
)
plt.hlines(0.5, -0.5, 1.5, linestyles="dashed", color="k")
plt.xlim([-0.5, 1.5])
plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
Here we can see that when we include the serialisation to disk in the benchmark, the insertion times are much more similar.

### 1.2) Box Query
# Run time: ~20s

# One time Setup
dict_store = DictionaryStore()
sql_store = SQLiteStore()
dict_store.append_many(annotations)
sql_store.append_many(annotations)
np.random.seed(123)
boxes = [
    Polygon.from_bounds(x, y, 128, 128)
    for x, y in np.random.randint(0, 1000, size=(100, 2))
]
stmt = "for box in boxes:\n" "    _ = store.query(box)"

# Time dictionary store
dict_runs = timeit.repeat(
    stmt,
    globals={"store": dict_store, "boxes": boxes},
    number=1,
    repeat=10,
)

# Time SQLite store
sqlite_runs = timeit.repeat(
    stmt,
    globals={"store": sql_store, "boxes": boxes},
    number=1,
    repeat=10,
)

# Plot the results
plot_results(
    experiments=[dict_runs, sqlite_runs],
    title="100 Box Queries",
    tick_label=["DictionaryStore", "SQLiteStore"],
)
plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
Here we can see that the `SQLiteStore` is a bit faster. Additionally, the difference in performance is more pronounced when there are more annotations in the store (as we will see later in this notebook) or when just returning keys:
# Run time: ~15s

# One time Setup
dict_store = DictionaryStore()
sql_store = SQLiteStore()
dict_store.append_many(annotations)
sql_store.append_many(annotations)
np.random.seed(123)
boxes = [
    Polygon.from_bounds(x, y, 128, 128)
    for x, y in np.random.randint(0, 1000, size=(100, 2))
]
stmt = "for box in boxes:\n" "    _ = store.iquery(box)"  # Just return the keys (uuids)

# Time dictionary store
dict_runs = timeit.repeat(
    stmt,
    globals={"store": dict_store, "boxes": boxes},
    number=1,
    repeat=10,
)

# Time SQLite store
sqlite_runs = timeit.repeat(
    stmt,
    globals={"store": sql_store, "boxes": boxes},
    number=1,
    repeat=10,
)

# Plot the results
plot_results(
    experiments=[dict_runs, sqlite_runs],
    title="100 Box Queries (Key Lookup Only)",
    tick_label=["DictionaryStore", "SQLiteStore"],
)
plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
### 1.3) Polygon Query
# Run time: ~15s

# One time Setup
dict_store = DictionaryStore()
sql_store = SQLiteStore()
dict_store.append_many(annotations)
sql_store.append_many(annotations)
np.random.seed(123)
query_polygons = [
    Polygon(
        [
            (x, y),
            (x + 128, y),
            (x + 128, y + 128),
            (x, y),
        ]
    )
    for x, y in np.random.randint(0, 1000, size=(100, 2))
]
stmt = "for polygon in query_polygons:\n" "    _ = store.query(polygon)"

# Time dictionary store
dict_runs = timeit.repeat(
    stmt,
    globals={"store": dict_store, "query_polygons": query_polygons},
    number=1,
    repeat=10,
)

# Time SQLite store
sqlite_runs = timeit.repeat(
    stmt,
    globals={"store": sql_store, "query_polygons": query_polygons},
    number=1,
    repeat=10,
)

# Plot the results
plot_results(
    experiments=[dict_runs, sqlite_runs],
    title="100 Polygon Queries",
    tick_label=["DictionaryStore", "SQLiteStore"],
)
plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
Here we can see that performing queries within a polygon region is about 10x faster with the `SQLiteStore` than with the `DictionaryStore`.

### 1.4) Predicate Query

Here we query the whole annotation region, but with a predicate to select only annotations with the class label of 0. We also demonstrate how creating a database index can dramatically improve the performance of queries.
# Run time: ~2m

# Setup
labelled_annotations = copy.deepcopy(annotations)
for n, annotation in enumerate(labelled_annotations):
    annotation.properties["class"] = n % 10
    annotation.properties["vector"] = np.random.randint(1, 4, 10).tolist()
predicate = "(props['class'] == ?) & (3 in props['vector'])"
classes = np.random.randint(0, 10, size=100)
stmt = "for n in classes:\n" "    store.query(where=predicate.replace('?', str(n)))"
dict_store = DictionaryStore()
sql_store = SQLiteStore()
dict_store.append_many(labelled_annotations)
sql_store.append_many(labelled_annotations)

# Time dictionary store
dict_runs = timeit.repeat(
    stmt,
    globals={"store": dict_store, "predicate": predicate, "classes": classes},
    number=1,
    repeat=10,
)
dict_result = dict_store.query(where=predicate.replace("?", "0"))

# Time SQLite store
sqlite_runs = timeit.repeat(
    stmt,
    globals={"store": sql_store, "predicate": predicate, "classes": classes},
    number=1,
    repeat=10,
)
sql_result = sql_store.query(where=predicate.replace("?", "0"))

# Add an index
# Note: Indexes may not always speed up the query (sometimes they can
# actually slow it down), test to make sure.
sql_store.create_index("class_lookup", "props['class']")
sql_store.create_index("has_3", "3 in props['vector']")

# Time SQLite store again
sqlite_index_runs = timeit.repeat(
    stmt,
    globals={"store": sql_store, "predicate": predicate, "classes": classes},
    number=1,
    repeat=10,
)
sql_index_result = sql_store.query(where=predicate.replace("?", "0"))

# # Validate the results against each other
# for a, b, c in zip(dict_result, sql_result, sql_index_result):
#     assert a.geometry == b.geometry == c.geometry

# Plot the results
plot_results(
    experiments=[dict_runs, sqlite_runs, sqlite_index_runs],
    title="100 Queries with a Predicate",
    tick_label=["DictionaryStore", "SQLiteStore", "SQLiteStore\n(with index)"],
)
plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
### Polygon & Predicate Query
# Run time: ~10s

# Setup
labelled_annotations = copy.deepcopy(annotations)
for n, annotation in enumerate(labelled_annotations):
    annotation.properties["class"] = n % 10
predicate = "props['class'] == "
classes = np.random.randint(0, 10, size=50)
query_polygons = [
    Polygon(
        [
            (x, y),
            (x + 128, y),
            (x + 128, y + 128),
            (x, y),
        ]
    )
    for x, y in np.random.randint(0, 1000, size=(100, 2))
]
stmt = (
    "for n, poly in zip(classes, query_polygons):\n"
    "    store.query(poly, where=predicate + str(n))"
)
dict_store = DictionaryStore()
sql_store = SQLiteStore()
dict_store.append_many(labelled_annotations)
sql_store.append_many(labelled_annotations)

# Time dictionary store
dict_runs = timeit.repeat(
    stmt,
    globals={
        "store": dict_store,
        "predicate": predicate,
        "classes": classes,
        "query_polygons": query_polygons,
    },
    number=1,
    repeat=10,
)
dict_result = dict_store.query(query_polygons[0], where=predicate + "0")

# Time SQLite store
sqlite_runs = timeit.repeat(
    stmt,
    globals={
        "store": sql_store,
        "predicate": predicate,
        "classes": classes,
        "query_polygons": query_polygons,
    },
    number=1,
    repeat=10,
)
sql_result = sql_store.query(query_polygons[0], where=predicate + "0")

# Check that the set difference of bounding boxes is empty i.e. all sets
# of results contain polygons which produce the same set of bounding
# boxes. This avoids being tripped up by slight variations in order or
# coordinate order between the results.
dict_set = set(x.geometry.bounds for x in dict_result)
sql_set = set(x.geometry.bounds for x in sql_result)
assert len(dict_set.difference(sql_set)) == 0

# Plot the results
plot_results(
    experiments=[dict_runs, sqlite_runs],
    title="100 Queries with a Polygon and Predicate",
    tick_label=["DictionaryStore", "SQLiteStore"],
)
plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
### Complex Predicate Query

Here we slightly increase the complexity of the predicate to show how predicate complexity can dramatically affect performance when handling many annotations. A sketch of one way to keep query-time predicates simple follows this benchmark.
# Run time: ~1m

# Setup
box = Polygon.from_bounds(0, 0, 1024, 1024)
labelled_annotations = copy.deepcopy(annotations)
for n, annotation in enumerate(labelled_annotations):
    annotation.properties["class"] = n % 4
    annotation.properties["n"] = n
predicate = "(props['n'] > 1000) & (props['n'] % 4 == 0) & (props['class'] == "
targets = np.random.randint(0, 4, size=100)
stmt = "for n in targets:\n" "    store.query(box, where=predicate + str(n) + ')')"
dict_store = DictionaryStore()
sql_store = SQLiteStore()
dict_store.append_many(labelled_annotations)
sql_store.append_many(labelled_annotations)

# Time dictionary store
dict_runs = timeit.repeat(
    stmt,
    globals={
        "store": dict_store,
        "predicate": predicate,
        "targets": targets,
        "box": box,
    },
    number=1,
    repeat=10,
)
dict_result = dict_store.query(box, where=predicate + "0)")

# Time SQLite store
sqlite_runs = timeit.repeat(
    stmt,
    globals={
        "store": sql_store,
        "predicate": predicate,
        "targets": targets,
        "box": box,
    },
    number=1,
    repeat=10,
)
sql_result = sql_store.query(box, where=predicate + "0)")

# Check that the set difference of bounding boxes is empty i.e. all sets
# of results contain polygons which produce the same set of bounding
# boxes. This avoids being tripped up by slight variations in order or
# coordinate order between the results.
dict_set = set(x.geometry.bounds for x in dict_result.values())
sql_set = set(x.geometry.bounds for x in sql_result.values())
assert len(dict_set.difference(sql_set)) == 0

# Plot the results
plot_results(
    experiments=[dict_runs, sqlite_runs],
    title="100 Queries with a Complex Predicate",
    tick_label=["DictionaryStore", "SQLiteStore"],
)
plt.show()
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
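One way to keep such predicates cheap for either store type is to precompute the expensive arithmetic as an extra property when the annotations are created, so the query-time predicate reduces to simple comparisons. The following is a minimal sketch of that idea; it is not part of the original benchmark, and the `is_target` property name is illustrative.

```python
# Sketch only: precompute the arithmetic part of the predicate as a 0/1 property
# at annotation-creation time, so queries only do simple equality checks.
flagged_annotations = copy.deepcopy(annotations)
for n, annotation in enumerate(flagged_annotations):
    annotation.properties["class"] = n % 4
    annotation.properties["is_target"] = int(n > 1000 and n % 4 == 0)

store = SQLiteStore()
store.append_many(flagged_annotations)

# Equivalent to "(props['n'] > 1000) & (props['n'] % 4 == 0) & (props['class'] == 0)"
# but with the arithmetic already folded into the stored property.
result = store.query(
    Polygon.from_bounds(0, 0, 1024, 1024),
    where="(props['is_target'] == 1) & (props['class'] == 0)",
)
```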
## Part 2: Large Scale Dataset Benchmarking

Here we generate two sets of annotations with over five million items each (in a 2237 x 2237 grid). One is a set of points, the other a set of generated cell boundaries.

The code to generate and write out the annotations in various formats is included in the following cells. However, some of these take a very long time to run. A pre-generated dataset is downloaded and then read from disk instead to save time; you may uncomment the generation code to replicate the original.

### 2.1) Points Dataset

Here we generate a simple points dataset in a grid. The grid is 2237 x 2237 and contains over 5 million points. We also write this to disk in various formats. Some formats take a long time and are commented out. A summary of times for a consumer laptop is shown in a table at the end. A short sketch of wrapping these points as annotations follows the generation cell.
# Generate some points with a little noise
# Run time: ~5s
points = np.array(
    [[x, y] for x in np.linspace(0, 75_000, 2237) for y in np.linspace(0, 75_000, 2237)]
)
# Add some noise between -1 and 1
np.random.seed(42)
points += np.random.uniform(-1, 1, size=(2237**2, 2))
_____no_output_____
BSD-3-Clause
benchmarks/annotation_store.ipynb
adamshephard/tiatoolbox
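To use these points with the stores benchmarked in Part 1, each coordinate pair can be wrapped in an `Annotation` with a `Point` geometry. Below is a minimal sketch; the `"class"` property is illustrative, and only a slice is converted here because building all five million annotations is slow and memory-hungry.

```python
# Sketch only: wrap a slice of the generated points as Point annotations and
# append them to a store, as in the Part 1 benchmarks.
point_annotations = [
    Annotation(Point(x, y), {"class": 0})
    for x, y in points[:10_000]
]
store = SQLiteStore()
keys = store.append_many(point_annotations)
len(keys)  # one generated key per appended annotation
```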